Another Fusion White Elephant Sighted in Germany

According to an article that just appeared in Science magazine, scientists in Germany have completed building a stellarator by the name of Wendelstein 7-X (W7-X), and are seeking regulatory permission to turn the facility on in November.  If you can’t get past the Science paywall, here’s an article in the popular media with some links.  Like the much bigger ITER facility now under construction at Cadarache in France, W7-X is a magnetic fusion device.  In other words, its goal is to confine a plasma of heavy hydrogen isotopes at temperatures much hotter than the center of the sun with powerful magnetic fields in order to get them to fuse, releasing energy in the process.  There are significant differences between stellarators and the tokamak design used for ITER, but in both approaches the idea is to hold the plasma in place long enough to get significantly more fusion energy out than was necessary to confine and heat the plasma.  Both approaches are probably scientifically feasible.  Both are also white elephants, and a waste of scarce research dollars.

The problem is that both designs have an Achilles heel.  Its name is tritium.  Tritium is a heavy isotope of hydrogen with a nucleus containing a proton and two neutrons instead of the usual lone proton.  Fusion reactions between tritium and deuterium, another heavy isotope of hydrogen with a single neutron in addition to the usual proton, begin to occur fast enough to be attractive as an energy source at plasma temperatures and densities much less than would be necessary for any alternative reaction.  The deuterium-tritium, or DT, reaction will remain the only feasible one for both stellarator and tokamak fusion reactors for the foreseeable future.  Unfortunately, tritium occurs in nature in only tiny trace amounts.
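
For reference, the reaction in question is the one below; the roughly 17.6 MeV released is carried off by a helium nucleus (the alpha particle) and a fast neutron:

```latex
% The DT fusion reaction and the split of its ~17.6 MeV energy release
\mathrm{D} + \mathrm{T} \;\longrightarrow\; {}^{4}\mathrm{He}\,(3.5~\mathrm{MeV}) \;+\; n\,(14.1~\mathrm{MeV})
```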

The question is, then, where do you get the tritium fuel to keep the fusion reactions going?  Well, in addition to a helium nucleus, the DT fusion reaction produces a fast neutron.  These can react with lithium to produce tritium.  If a lithium-containing blanket could be built surrounding the reaction chamber in such a way as to avoid interfering with the magnetic fields, and yet thick enough and close enough to capture enough of the neutrons, then it should be possible to generate enough tritium to replace that burned up in the fusion process.  It sounds complicated but, again, it appears to be at least scientifically feasible.  However, it is by no means as certain that it is economically feasible.
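
Specifically, the breeding reactions on the two naturally occurring isotopes of lithium are as follows; only the lithium-6 reaction is exothermic, which is one reason blanket design is less simple than it sounds:

```latex
% Tritium breeding on the two naturally occurring lithium isotopes:
% (the first is exothermic; the second is endothermic and needs fast neutrons)
n + {}^{6}\mathrm{Li} \;\longrightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8~\mathrm{MeV}
\\
n + {}^{7}\mathrm{Li} \;\longrightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + n - 2.5~\mathrm{MeV}
```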

Consider what we’re dealing with here.  Tritium is an extremely slippery material that can pass right through the walls of some types of metal.  It is also highly radioactive, with a half-life of about 12.3 years.  It will be necessary to find some way to efficiently extract it from the lithium blanket, allowing none of it to leak into the surrounding environment.  If any of it gets away, it will be easily detectable.  The neighbors are sure to complain and, probably, lawyer up.  Again, all this might be doable.  The problem is that it will never be doable at a low enough cost to make fusion reactor designs based on these approaches even remotely economically competitive with the non-fossil alternative sources of energy that will be available for, at the very least, the next several centuries.
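
To get a feel for what that half-life means for a fuel inventory, here’s a back-of-the-envelope sketch; it’s nothing but the standard radioactive decay law, with no plant-specific assumptions:

```python
# Plain radioactive decay arithmetic for tritium (half-life ~12.3 years).
HALF_LIFE_YEARS = 12.3

def fraction_remaining(years: float) -> float:
    """Fraction of an initial tritium inventory still left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# A stored tritium stockpile shrinks whether or not it is ever burned:
for t in (1.0, 5.0, 12.3, 25.0):
    print(f"after {t:4.1f} years: {fraction_remaining(t):.1%} remains")
```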

What’s that?  Reactor design studies by large and prestigious universities and corporations have all come to the conclusion that these magnetic fusion beasts will be able to produce electricity at least as cheaply as the competition?  I don’t think so.  I’ve participated in just such a government-funded study, conducted by a major corporation as prime contractor, with several other prominent universities and corporations participating as subcontractors.  I’m familiar with the methodology used in several others.  In general, it’s possible to make the cost of electricity come out at whatever figure you choose, within reason, using the most approved methods and the most sound project management and financial software.  If the government is funding the work, it can be safely assumed that they don’t want to hear something like, “Fuggedaboudit, this thing will be way too expensive to build and run.”  That would make the office that funded the work look silly, and the fusion researchers involved in the design look like welfare queens in white coats.  The “right” cost numbers will always come out of these studies in the end.

I submit that a better way to come up with a cost estimate is to use a little common sense.  Do you really think that a commercial power company will be able to master the intricacies of tritium production and extraction from the vicinity of a highly radioactive reaction chamber at anywhere near the cost of, say, wind and solar combined with next generation nuclear reactors for baseload power?  If you do, you’re a great deal more optimistic than me.  W7-X cost a billion euros.  ITER is slated to cost 13 billion, and will likely come in at well over that.  With research money hard to come by in Europe for much worthier projects, throwing amounts like that down a rat hole doesn’t seem like a good plan.

All this may come as a disappointment to fusion enthusiasts.  On the other hand, you may want to consider the fact that, if fusion had been easy, we would probably have managed to blow ourselves up with pure fusion weapons by now.  Beyond that, you never know when some obscure genius might succeed in pulling a rabbit out of their hat in the form of some novel confinement scheme.  Several companies claim they have sure-fire approaches that are so good they will be able to dispense with tritium entirely in favor of more plentiful, naturally occurring isotopes.  See, for example, here, here, and here, and the summary at the Next Big Future website.  I’m not optimistic about any of them, either, but you never know.

Stellarator

No Ignition at the National Ignition Facility: A Post Mortem

The National Ignition Facility, or NIF, at Lawrence Livermore National Laboratory (LLNL) in California was designed and built, as its name implies, to achieve fusion ignition.  The first experimental campaign intended to achieve that goal, the National Ignition Campaign, or NIC, ended in failure.  Scientists at LLNL recently published a paper in the journal Physics of Plasmas outlining, to the best of their knowledge to date, why the experiments failed.  Entitled “Radiation hydrodynamics modeling of the highest compression inertial confinement fusion ignition experiment from the National Ignition Campaign,” the paper concedes that,

The recently completed National Ignition Campaign (NIC) on the National Ignition Facility (NIF) showed significant discrepancies between post-shot simulations of implosion performance and experimentally measured performance, particularly in thermonuclear yield.

To understand what went wrong, it’s necessary to know some facts about the fusion process and the nature of scientific attempts to achieve fusion in the laboratory.  Here’s the short version:  The neutrons and protons in an atomic nucleus are held together by the strong force, which is about 100 times stronger than the electromagnetic force, and operates only over tiny distances measured in femtometers.  The average binding energy per nucleon (proton or neutron) due to the strong force is greatest for the elements in the middle of the periodic table, and gradually decreases in the directions of both the lighter and heavier elements.  That’s why energy is released by fissioning heavy atoms like uranium into lighter atoms, or fusing light atoms like hydrogen into heavier atoms.  Fusion of light elements isn’t easy.  Before the strong force that holds atomic nuclei together can take effect, two light nuclei must be brought very close to each other.  However, atomic nuclei are all positively charged, and like charges repel.  The closer they get, the stronger the repulsion becomes.  The sun solves the problem with its crushing gravitational force.  On earth, the energy of fission can also provide the necessary conditions in nuclear weapons.  However, concentrating enough energy to accomplish the same thing in the laboratory has proved a great deal more difficult.
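
To see why “a great deal more difficult” is an understatement, consider a minimal sketch of the Coulomb barrier arithmetic; the 3 femtometer separation used below is an assumed round number for where the strong force takes over:

```python
# Rough Coulomb barrier estimate, using e^2/(4*pi*eps0) = 1.44 MeV*fm.
COULOMB_MEV_FM = 1.44

def coulomb_barrier_mev(z1: int, z2: int, r_fm: float) -> float:
    """Electrostatic potential energy of two nuclei with charges z1 and z2
    at a separation of r_fm femtometers."""
    return COULOMB_MEV_FM * z1 * z2 / r_fm

barrier = coulomb_barrier_mev(1, 1, 3.0)   # deuterium on tritium
print(f"D-T barrier: ~{barrier * 1000:.0f} keV")   # ~480 keV
# A DT plasma "burns" at temperatures of order 10 keV, far below the barrier.
# Fusion proceeds anyway via the fast tail of the thermal distribution and
# quantum tunneling, which is why the required temperatures, while enormous,
# are not quite astronomical.
```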

The problem is to confine incredibly hot material at sufficiently high densities for a long enough time for significant fusion to take place.  At the moment there are two mainstream approaches to solving it:  magnetic fusion and inertial confinement fusion, or ICF.  In the former, confinement is achieved with powerful magnetic lines of force.  That’s the approach at the international ITER fusion reactor project currently under construction in France.  In ICF, the idea is to first implode a small target of fuel material to extremely high density, and then heat it to the necessary high temperature so quickly that its own inertia holds it in place long enough for fusion to happen.  That’s the approach being pursued at the NIF.

The NIF consists of 192 powerful laser beams, which can concentrate about 1.8 megajoules of light on a tiny spot, delivering all that energy in a time of only a few nanoseconds.  It is much larger than the next biggest similar facility, the OMEGA laser system at the Laboratory for Laser Energetics in Rochester, NY, which maxes out at about 40 kilojoules.  The NIC experiments were indirect drive experiments, meaning that the lasers weren’t aimed directly at the BB-sized, spherical target, or “capsule,” containing the fuel material (a mixture of deuterium and tritium, two heavy isotopes of hydrogen).  Instead, the target was mounted inside of a tiny, cylindrical enclosure known as a hohlraum with the aid of a thin, plastic “tent.”  The lasers were fired through holes on each end of the hohlraum, striking the walls of the cylinder, generating a pulse of x-rays.  These x-rays then struck the target, ablating material from its surface at high speed.  In a manner similar to a rocket exhaust, this drove the remaining target material inward, causing it to implode to extremely high densities, about 40 times that of the heaviest naturally occurring elements.  As it implodes, the material must be kept as “cold” as possible, because it’s easier to squeeze and compress things that are cold than those that are hot.  However, when it reaches maximum density, a way must be found to heat a small fraction of this “cold” material to the very high temperatures needed for significant fusion to occur.  This is accomplished by setting off a series of shocks during the implosion process that converge at the center of the target at just the right time, generating the necessary “hot spot.”  The resulting fusion reactions release highly energetic alpha particles, which spread out into the surrounding “cold” material, heating it and causing it to fuse as well, in a “burn wave” that propagates outward.  “Ignition” occurs when the amount of fusion energy released in this way is equal to the energy in the laser beams that drove the target.
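
Some illustrative arithmetic conveys what those numbers mean; the 4 nanosecond figure below is just an assumed stand-in for “a few nanoseconds,” not a NIF specification:

```python
# Orders of magnitude only.
laser_energy_joules = 1.8e6    # ~1.8 MJ of laser light
pulse_seconds = 4e-9           # assumed effective pulse length

power_watts = laser_energy_joules / pulse_seconds
print(f"~{power_watts / 1e12:.0f} TW")   # hundreds of terawatts

def gain(fusion_yield_joules: float) -> float:
    """'Ignition' as defined in the text corresponds to gain >= 1."""
    return fusion_yield_joules / laser_energy_joules
```

For those few nanoseconds, in other words, the NIF’s beams deliver power exceeding the average rate at which the entire world generates electricity by roughly two orders of magnitude.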

As noted above, things didn’t go as planned.  The actual fusion yield achieved in the best experiment was less than that predicted by the best radiation hydrodynamics computer codes available at the time by a factor of about 50, give or take.  The LLNL paper in Physics of Plasmas discusses some of the reasons for this, and describes subsequent improvements to the codes that account for some, but not all, of the experimental discrepancies.  According to the paper,

Since these simulation studies were completed, experiments have continued on NIF and have identified several important effects – absent in the previous simulations – that have the potential to resolve at least some of the large discrepancies between simulated and experimental yields.  Briefly, these effects include larger than anticipated low-mode distortions of the imploded core – due primarily to asymmetries in the x-ray flux incident on the capsule, – a larger than anticipated perturbation to the implosion caused by the thin plastic membrane or “tent” used to support the capsule in the hohlraum prior to the shot, and the presence, in some cases, of larger than expected amounts of ablator material mixed into the hot spot.

In a later section, the LLNL scientists also note,

Since this study was undertaken, some evidence has also arisen suggesting an additional perturbation source other than the three specifically considered here.  That is, larger than anticipated fuel pre-heat due to energetic electrons produced from laser-plasma interactions in the hohlraum.

In simple terms, the first of these passages means that the implosions weren’t symmetric enough, and the second means that the fuel may not have been “cold” enough during the implosion process.  Any variation from perfectly spherical symmetry during the implosion can rob energy from the central hot spot, allow material to escape before fusion can occur, mix cold fuel material into the hot spot, quenching it, etc., potentially causing the experiment to fail.  The asymmetries in the x-ray flux mentioned in the paper mean that the target surface would have been pushed harder in some places than in others, resulting in asymmetries to the implosion itself.  A larger than anticipated perturbation due to the “tent” would have seeded instabilities, such as the Rayleigh-Taylor instability.  Imagine holding a straw filled with water upside down.  Atmospheric pressure will prevent the water from running out.  Now imagine filling a perfectly cylindrical bucket with water to the same depth.  If you hold it upside down, the atmospheric pressure over the surface of the water is the same.  Based on the straw experiment, the water should stay in the bucket, just as it did in the straw.  Nevertheless, the water comes pouring out.  As they say in the physics business, the straw experiment doesn’t “scale.”  The reason for this anomaly is the Rayleigh-Taylor instability.  Over such a large surface, small variations from perfect smoothness are gradually amplified, growing to the point that the surface becomes “unstable,” and the water comes splashing out.  Another, related instability, the Richtmyer-Meshkov instability, leads to similar results in material where shocks are present, as in the NIF experiments.
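
The straw-versus-bucket anomaly can actually be made quantitative, which helps show why “scaling” is such serious business.  A minimal sketch, using the textbook Rayleigh-Taylor dispersion relation for a water-air interface:

```python
import math

# Classical RT dispersion relation with surface tension:
#   growth_rate^2 = A*g*k - sigma*k^3 / (rho_heavy + rho_light)
# Surface tension stabilizes short wavelengths; modes below a cutoff
# wavelength cannot grow at all.
sigma = 0.072        # N/m, surface tension of water against air
g = 9.81             # m/s^2
delta_rho = 1000.0   # kg/m^3 (density of air neglected)

cutoff = 2 * math.pi * math.sqrt(sigma / (g * delta_rho))
print(f"cutoff wavelength: ~{cutoff * 100:.1f} cm")   # ~1.7 cm
# A straw is narrower than the cutoff, so no unstable mode fits inside it
# and the water stays put.  A bucket admits much longer wavelengths, the
# instability grows, and the water comes splashing out.
```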

Now, with the benefit of hindsight, it’s interesting to look back at some of the events leading up to the decision to build the NIF.  At the time, the government used a “key decision” process to approve major proposed projects.  The first key decision, known as Key Decision 0, or KD0, was approval to go forward with conceptual design.  The second was KD1, approval of engineering design and acquisition.  There were more “key decisions” in the process, but after passing KD1, it could safely be assumed that most projects were “in the bag.”  In the early 90’s, a federal advisory committee, known as the Inertial Confinement Fusion Advisory Committee, or ICFAC, had been formed to advise the responsible agency, the Department of Energy (DOE), on matters relating to the national ICF program.  Among other things, its mandate included advising the government on whether it should proceed with key decisions on the NIF project.  The Committee’s advice was normally followed by DOE.

At the time, there were six major “program elements” in the national ICF program.  These included the three weapons laboratories, LLNL, Los Alamos National Laboratory (LANL), and Sandia National Laboratories (SNL).  The remaining three included the Laboratory for Laser Energetics at the University of Rochester (UR/LLE), the Naval Research Laboratory (NRL), and General Atomics (GA).  Spokespersons from all these “program elements” appeared before the ICFAC at a series of meetings in the early 90’s.  The critical meeting for approval of the decision to pass through KD1 took place in May 1994.  Prior to that time, extensive experimental programs at LLNL’s Nova laser, UR/LLE’s OMEGA, and a host of other facilities had been conducted to address potential uncertainties concerning whether the NIF could achieve ignition.  The best computer codes available at the time had modeled proposed ignition targets, and predicted that several different designs would ignite, typically producing “gains,” the ratio of the fusion energy out to the laser energy in, ranging from 1 to 10.  There was just one major fly in the ointment – a brilliant physicist named Steve Bodner, who directed the ICF program at NRL at the time.

Bodner told the ICFAC that the chances of achieving ignition on the NIF were minimal, providing his reasons in the form of a detailed physics analysis.  Among other things, he noted that there was no way of controlling the symmetry because of blow-off of material from the hohlraum wall, which could absorb both laser light and x-rays.  Ablated material from the capsule itself could also absorb laser and x-ray radiation, again destroying symmetry.  He pointed out that codes had raised the possibility of pressure perturbations on the capsule surface due to stagnation of the blow-off material on the hohlraum axis.  LLNL’s response was that these problems could be successfully addressed by filling the hohlraum with a gas such as helium, which would hold back the blow-off from the walls and target.  Bodner replied that such “solutions” had never really been tested because of the inability to do experiments on Nova with sufficient pulse length.  In other words, it was impossible to conduct experiments that would “scale” to the NIF on existing facilities.  In building the NIF, we might be passing from the “straw” to the “bucket.”  He noted several other areas of major uncertainty with NIF-scale targets, such as the possibility of unaccounted for reflection of the laser light, and the possibility of major perturbations due to so-called laser-plasma instabilities.

In light of these uncertainties, Bodner suggested delaying approval of KD1 for a year or two until these issues could be more carefully studied.  By that point, he argued, we might have gained the technological confidence to proceed.  However, I suspect he knew that two years would never be enough to resolve the issues he had raised.  What Bodner really wanted to do was build a much larger facility, known as the Laboratory Microfusion Facility, or LMF.  The LMF would have a driver energy of 5 to 10 megajoules compared to the NIF’s 1.8.  It had been seriously discussed in the late 80’s and early 90’s.  Potentially, such a facility could be built with Bodner’s favored KrF laser drivers, the kind used on the Nike laser system at NRL, instead of the glass lasers that had been chosen for NIF.  It would be powerful enough to erase the physics uncertainties he had raised by “brute force.”  Bodner’s proposed approach was plausible and reasonable.  It was also a forlorn hope.

Funding for the ICF program had been cut in the early 90’s.  Chances of gaining approval for a beast as expensive as LMF were minimal.  As a result, it was now officially considered a “follow-on” facility to the NIF.  No one took this seriously at the time.  Everyone knew that, if NIF failed, there would be no “follow-on.”  Bodner knew this, the scientists at the other program elements knew it, and so did the members of the ICFAC.  The ICFAC was composed of brilliant scientists.  However, none of them had any real insight into the guts of the computer codes that were predicting ignition on the NIF.  Still, they had to choose between the results of the big codes, and Bodner’s physical insight bolstered by what were, in comparison, “back of the envelope” calculations.  They chose the big codes.  With the exception of Tim Coffey, then Director of NRL, they voted to approve passing through KD1 at the May meeting.

In retrospect, Bodner’s objections seem prophetic.  The NIC has failed, and he was not far off the mark concerning the reasons for the failure.  It’s easy to construe the whole affair as a morality tale, with Bodner playing the role of neglected Cassandra, and the LLNL scientists as villains whose overweening technological hubris finally collided with the grim realities of physics.  Things aren’t that simple.  The LLNL people, not to mention the supporters of NIF from the other program elements, included many responsible and brilliant scientists.  They were not as pessimistic as Bodner, but none of them was 100% positive that the NIF would succeed.  They decided the risk was warranted, and they may well yet prove to be right.

In the first place, as noted above, chances that an LMF might be substituted for the NIF after another year or two of study were very slim.  The funding just wasn’t there.  Indeed, the number of laser beams on the NIF itself had been reduced from the originally proposed 240 to 192, at least in part, for that very reason.  It was basically a question of the NIF or nothing.  Studying the problem to death, now such a typical feature of the culture at our national research laboratories, would have led nowhere.  The NIF was never conceived as an energy project, although many scientists preferred to see it in that light.  Rather, it was built to serve the national nuclear weapons program.  Its supporters were aware that it would be of great value to that program even if it didn’t achieve ignition.  In fact, it is, and is now providing us with a technological advantage that rival nuclear powers can’t match in this post-testing era.  Furthermore, LLNL and the other weapons laboratories were up against another problem – what you might call a demographic cliff.  The old, testing-era weapons designers were getting decidedly long in the tooth, and it was necessary to find some way to attract new talent.  A facility like the NIF, capable of exploring issues in inertial fusion energy, astrophysics, and other non-weapons-related areas of high energy density physics, would certainly help address that problem as well.

Finally, the results of the NIC in no way “proved” that ignition on the NIF is impossible.  There are alternatives to the current indirect drive approach with frequency-tripled “blue” laser beams.  Much more energy, up to around 4 megajoules, might be available if the known problems of using longer wavelength “green” light can be solved.  Thanks to theoretical and experimental work done by the ICF team at UR/LLE under the leadership of Dr. Robert McCrory, the possibility of direct drive experiments on the NIF, hitting the target directly instead of shooting the laser beams into a “hohlraum” can, was also left open, using a so-called “polar” illumination approach.  Another possibility is the “fast ignitor” approach to ICF, which would dispense with the need for complicated converging shocks to produce a central “hot spot.”  Instead, once the target had achieved maximum density, the hot spot would be created on the outer surface using a separate driver beam.

In other words, while the results of the NIC are disappointing, stay tuned.  Pace Dr. Bodner, the scientists at LLNL may yet pull a rabbit out of their hats.

ICF

Oswald Spengler got it Wrong

Sometimes the best metrics for public intellectuals are the short articles they write for magazines.  There are page limits, so they have to get to the point.  It isn’t as easy to camouflage vacuous ideas behind a smoke screen of verbiage.  Take, for example, the case of Oswald Spengler.  His “Decline of the West” was hailed as the inspired work of a prophet in the years following its publication in 1918.  Read Spengler’s Wiki entry and you’ll see what I mean.  He should have quit while he was ahead.

Fast forward to 1932, and the Great Depression was at its peak.  The Decline of the West appeared to be a fait accompli.  Spengler would have been well-advised to rest on his laurels.  Instead, he wrote an article for The American Mercury, still edited at the time by the Sage of Baltimore, H. L. Mencken, with the reassuring title, “Our Backs are to the Wall!”  It was a fine synopsis of the themes Spengler had been harping on for years, and a prophecy of doom worthy of Jeremiah himself.  It was also wrong.

According to Spengler, high technology carried within itself the seeds of its own collapse.  Man had dared to “revolt against nature.”  Now the very machines he had created in the process were revolting against man.  At the time he wrote the article he summed up the existing situation as follows:

A group of nations of Nordic blood under the leadership of British, German, French, and Americans command the situation.  Their political power depends on their wealth, and their wealth consists in their industrial strength.  But this in turn is bound up with the existence of coal.  The Germanic peoples, in particular, are secured by what is almost a monopoly of the known coalfields…

Spengler went on to explain that,

Countries industrially poor are poor all around; they cannot support an army or wage a war; therefore they are politically impotent; and the workers in them, leaders and led alike, are objects in the economic policy of their opponents.

No doubt he would have altered this passage somewhat had he been around to witness the subsequent history of places like Vietnam, Algeria, and Cambodia.  Willpower, ideology, and military genius have trumped political and economic power throughout history.  Spengler simply assumed they would be ineffective against modern technology because the “Nordic” powers had not been seriously challenged in the 50 years before he wrote his book.  It was a rash assumption.  Even more rash were his assumptions about the early demise of modern technology.  He “saw” things happening in his own times that weren’t really happening at all.  For example,

The machine, by its multiplication and its refinement, is in the end defeating its own purpose.  In the great cities the motor-car has by its numbers destroyed its own value, and one gets on quicker on foot.  In Argentina, Java, and elsewhere the simple horse-plough of the small cultivator has shown itself economically superior to the big motor implement, and is driving the latter out.  Already, in many tropical regions, the black or brown man with his primitive ways of working is a dangerous competitor to the modern plantation-technic of the white.

Unfortunately, motor cars and tractors can’t read, so they went right on multiplying without paying any attention to Spengler’s book.  At least he wasn’t naïve enough to believe that modern technology would end because of the exhaustion of the coalfields.  He knew that we were quite clever enough to come up with alternatives.  However, in making that very assertion, he stumbled into what was perhaps the most fundamental of all his false predictions: the imminence of the “collapse of the West.”

It is, of course, nonsense to talk, as it was fashionable to do in the Nineteenth Century, of the imminent exhaustion of the coal-fields within a few centuries and of the consequences thereof – here, too, the materialistic age could not but think materially.  Quite apart from the actual saving of coal by the substitution of petroleum and water-power, technical thought would not fail ere long to discover and open up still other and quite different sources of power.  It is not worth while thinking ahead so far in time.  For the west-European-American technology will itself have ended by then.  No stupid trifle like the absence of material would be able to hold up this gigantic evolution.

Alas, “so far in time” came embarrassingly fast, with the discovery of nuclear fission a mere six years later.  Be that as it may, among the reasons that this “gigantic evolution” was unstoppable was what Spengler referred to as “treason to technics.”  As he put it,

Today more or less everywhere – in the Far East, India, South America, South Africa – industrial regions are in being, or coming into being, which, owing to their low scales of wages, will face us with a deadly competition.  The unassailable privileges of the white races have been thrown away, squandered, betrayed.

In other words, the “treason” consisted of the white race failing to keep its secrets to itself, but bestowing them on the brown and black races.  They, however, were only interested in using this technology against the original creators of the “Faustian” civilization of the West.  Once the whites were defeated, they would have no further interest in it:

For the colored races, on the contrary, it is but a weapon in their fight against the Faustian civilization, a weapon like a tree from the woods that one uses as scaffolding, but discards as soon as it has served its purpose.  This machine-technic will end with the Faustian civilization and one day will lie in fragments, forgotten – our railways and steamships as dead as the Roman roads and the Chinese wall, our giant cities and skyscrapers in ruins, like old Memphis and Babylon.  The history of this technic is fast drawing to its inevitable close.  It will be eaten up from within.  When, and in what fashion, we so far know not.

Spengler was wise to include the Biblical caveat that, “…about that day or hour no one knows, not even the angels in heaven, nor the Son, but only the Father”  (Matthew 24:36).  However, he had too much the spirit of the “end time” Millennialists who have cropped up like clockwork every few decades for the last 2000 years, predicting the imminent end of the world, to leave it at that.  Like so many other would-be prophets, his predictions were distorted by a grossly exaggerated estimate of the significance of the events of his own time.  Christians, for example, have commonly assumed that reports of war, famine and pestilence in their own time are somehow qualitatively different from the war, famine and pestilence that have been a fixture of our history for that last 2000 years, and conclude that they are witnessing the signs of the end times, when, “…nation shall rise against nation, and kingdom against kingdom: and there shall be famines, and pestilences, and earthquakes, in divers places” (Matthew 24:7).  In Spengler’s case, the “sign” was the Great Depression, which was at its climax when he wrote the article:

The center of gravity of production is steadily shifting away from them, especially since even the respect of the colored races for the white has been ended by the World War.  This is the real and final basis of the unemployment that prevails in the white countries.  It is no mere crisis, but the beginning of a catastrophe.

Of course, Marxism was in high fashion in 1932 as well.  Spengler tosses it in for good measure, agreeing with Marx on the inevitability of revolution, but not on its outcome:

This world-wide mutiny threatens to put an end to the possibility of technical economic work.  The leaders (bourgeoisie, ed.) may take to flight, but the led (proletariat, ed.) are lost.  Their numbers are their death.

Spengler concludes with some advice, not for us, or our parents, or our grandparents, but for our great-grandparents’ generation:

Only dreamers believe that there is a way out.  Optimism is cowardice… Our duty is to hold on to the lost position, without hope, without rescue, like that Roman soldier whose bones were found in front of a door in Pompeii, who, during the eruption of Vesuvius, died at his post because they forgot to relieve him.  That is greatness.  That is what it means to be a thoroughbred.  The honorable end is the one thing that can not be taken from a man.

One must be grateful that later generations of cowardly optimists donned their rose-colored glasses in spite of Spengler, went right on using cars, tractors, and other mechanical abominations, and created a world in which yet later generations of Jeremiahs could regale us with updated predictions of the end of the world.  And who can blame them?  After all, eventually, at some “day or hour no one knows, not even the angels in heaven,” they are bound to get it right, if only because our sun decides to supernova.  When that happens, those who are still around are bound to dust off their ancient history books, smile knowingly, and say, “See, Spengler was right after all!”

Steven Pinker, Science, and “Scientism”

In an article that appeared recently in The New Republic entitled, “Science is not Your Enemy,” Steven Pinker is ostensibly defending science, going so far as to embrace “scientism.”  As he points out, “The term ‘scientism’ is anything but clear, more of a boo-word than a label for any coherent doctrine.”  That’s quite true, which is reason enough to be somewhat circumspect about self-identifying (if I may coin a term) as a “scientismist.”  Nothing daunted, Pinker does just that, defending scientism in “the good sense.”  He informs us that “good scientism” is distinguished by “an explicit commitment to two ideals,” namely, the propositions that the world is intelligible, and that the acquisition of knowledge is hard.

Let me say up front that I am on Pinker’s side when it comes to the defense of what he calls “science,” just as I am on his side in rejecting the ideology of the Blank Slate.  Certainly he’s worthy of a certain respect, if only in view of the sort of people who have been coming out of the woodwork to attack him for his latest.  Anyone with enemies like that can’t be all bad.  It’s just that, whenever I read his stuff, I find myself rolling my eyes before long.  Consider, for example, his tome about the Blank Slate.  My paperback version runs to 500 pages give or take, and in all that prose, I find only a single mention of Robert Ardrey, and then only accompanied by the claim that he was “totally and utterly wrong.”  Now, by the account of the Blank Slaters themselves (see, in particular, the essays by Geoffrey Gorer in Man and Aggression, edited by Ashley Montagu), Robert Ardrey was their most effective and influential opponent.  In other words, Pinker wrote a thick tome, purporting to be an account of the Blank Slate, in which he practically ignored the contributions of the most important player in the whole affair, only mentioning him at all in order to declare him wrong, when in fact he was on the same side of the issue as Pinker.

Similar problems turn up in Pinker’s latest.  For example, he writes,

Just as common, and as historically illiterate, is the blaming of science for political movements with a pseudoscientific patina, particularly Social Darwinism and eugenics.  Social Darwinism was the misnamed laissez-faire philosophy of Herbert Spencer.  It was inspired not by Darwin’s theory of natural selection, but by Spencer’s Victorian-era conception of a mysterious natural force for progress, which was best left unimpeded.

Here, as in numerous similar cases, it is clear Pinker has never bothered to read Spencer.  The claim that he was a “Social Darwinist” was a red herring tossed out by his enemies after he was dead.  Based on a minimal fair reading of his work, the claim is nonsense.  If actually reading Spencer is too tedious, just Google something like “Spencer Social Darwinism.”  Check a few of the hits, and you will find that a good number of modern scholars have been fair-minded enough to actively dispute the claim.  Other than that, you will find no reference to specific writings of Spencer in which he promotes Social Darwinism as it is generally understood.  The same could be said of the laissez faire claim.  Spencer supported a small state, but hardly rejected state intervention in all cases.  He supported labor unions, and in his earlier writings was even of the opinion that land should be held in common.  As for “Victorian-era” conceptions, if memory serves, Darwin wrote during that era as well, and while Spencer embraced Lamarckism and had a less than up-to-date notion of how evolution works, I find no reference in any of his work to a “mysterious natural force for progress.”

Pinker’s comments about morality are similarly clouded.  He writes,

In other words, the worldview that guides the moral and spiritual values of an educated person today is the worldview given to us by science… The facts of science, by exposing the absence of purpose in the laws governing the universe, force us to take responsibility for the welfare of ourselves, our species, and our planet.  For the same reason, they undercut any moral or political system based on mystical forces, quests, destinies, dialectics, struggles, or messianic ages.  And in combination with a few unexceptionable convictions – that all of us value our own welfare and that we are social beings who impinge on each other and can negotiate codes of conduct – the scientific facts militate toward a defensible morality, namely, adhering to principles that maximize the flourishing of humans and other sentient beings.

In other words, Pinker has bought in to the Sam Harris “human flourishing” mumbo-jumbo, and thinks that the “facts of science” can somehow become material objects with the power to dictate that which is “good” and that which is “evil.”  Here Pinker, in company with Harris, has taken leave of his senses.  Based on what he wrote earlier in the essay, we know that he is aware that what we understand as morality is the expression of evolved behavioral traits.  Those traits are their ultimate cause, and without them morality would literally cease to exist as we know it.  They exist in the brains of individuals, solely by virtue of the fact that, at some point in the distant past utterly unlike the present, they promoted our survival.  And yet, in spite of the fact that Pinker must understand, at least at some level, that these things are true, he agrees with Harris that the emotional responses, or, as Hume, whom Pinker also claims to have read, puts it, sentiments, can jump out of our heads, become objects, or things in themselves, independent of the minds of individuals, and, as such, can be manipulated and specified by the “facts of science.”  Presumably, once the “educated” and the “scientists” have agreed on what the “facts of science” tell us is a “defensible morality,” at that point the rest of us become bound to agree with them on the meanings of “good” and “evil” that they pass down to us, must subordinate our own emotions and inherent predispositions regarding such matters to “science,” and presumably be justifiably (by “science”) punished if we do not.  What nonsense!

“Science” is not an object, any more than “good” and “evil.”  “Science” cannot independently “say” anything, nor can it create values.  In reality, “science” is a rather vague set of principles and prescriptions for approaching the truth, applied willy-nilly if at all by most “scientists.”  By even embracing the use of the term “science” in that way, Pinker is playing into the hands of his enemies.  He is validating their claim that “science” is actually a thing, but in their case, a bête noire, transcending its real nature as a set of rules, more or less vaguely understood and applied, to become an object in itself.  Once the existence of such a “science” object is accepted, it becomes a mere bagatelle to fix on it the responsibility for all the evils of the world, or, in the case of the Pinkers of the world, all the good.

In reality, the issue here is not whether this imaginary “science” object exists and, assuming it does, whether it is “good” or “evil.”  It is about whether we should be empowered to learn things about the universe in which we live or not.  The opponents of “scientism” typically rail against such things as eugenics, Social Darwinism, and the atomic bomb.  These are supposedly the creations of the “science” object.  But, in fact, they are no such thing.  In the case of eugenics and Social Darwinism, they represent the moral choices of individuals.  In the case of the atomic bomb, we have a thing which became possible as a result of the knowledge of the physical world acquired in the preceding half a century, give or take.  What would the opponents of “scientism” have us change?  The decision to build the atomic bomb?  Fine, but in that case they are not opposing “science,” but rather a choice made by individuals.  Opposition to “science” itself can only reasonably be construed as opposition to the acquisition of the knowledge that made the bomb possible to begin with.  If that is what the opponents of “scientism” really mean, let them put their cards on the table.  Let them explain to us in just what ways those things which the rest of us are to be allowed to know will be limited, and just why it is they think they have the right to dictate to the rest of us what we can know and what we can’t.

It seems to me this whole “science” thing is getting out of hand.  If we must have an object, it would be much better for us to go back to the Enlightenment and use the term “reason.”  It seems to me that would make it a great deal more clear what we are talking about.  It would reveal the true nature of the debate.  It is not about the “science” object, and whether it is “good” or “evil,” but about whether we should actually try to use our rational minds, or instead relegate our brains to the less ambitious task of serving as a convenient stuffing for our skulls.

Of Cold Fusion and the Timidity of ARPA-E

ARPA-E, or the Advanced Research Projects Agency – Energy, is supposed to be DOE’s version of DARPA.  According to its website, its mission,

…is to fund projects that will develop transformational technologies that reduce America’s dependence on foreign energy imports; reduce U.S. energy related emissions (including greenhouse gasses); improve energy efficiency across all sectors of the U.S. economy and ensure that the U.S. maintains its leadership in developing and deploying advanced energy technologies.

So far, it has not come up with anything quite as “transformational” as the Internet or stealth technology.  There is good reason for this.  Its source selection people are decidedly weak in the knees.  Consider the sort of stuff it’s funded in the latest round of contract awards.  The people at DARPA would probably call it “workmanlike.”  H. L. Mencken, the great Sage of Baltimore, would more likely have called it “pure fla fla.”  For example, there are “transformational” systems to twiddle with natural gas storage that would have been better left to the industry, which is not exactly short of cash at the moment, to develop on its own, such as,

Liquid-Piston Isothermal Home Natural Gas Compressor

Chilled Natural Gas At-Home Refueling

and,

Superplastic-Formed Gas Storage Tanks

There is the “transformational” university research that is eye-glazingly mundane, and best reserved as filler for the pages of obscure academic journals, such as,

Cell-level Power Management of Large Battery Packs

Health Management System for Reconfigurable Battery Packs

and,

Optimal Operation and Management of Batteries Based on Real Time Predictive Modeling and Adaptive Battery Management Techniques.

There is some “groundbreaking” stuff under the rubric of “build a better magnet, and the world will beat a pathway to your door.”

Manganese-Based Permanent Magnet with 40 MGOe at 200°C

Rare-Earth-Free Permanent Magnets for Electrical Vehicle Motors and Wind Turbine Generators: Hexagonal Symmetry Based Materials Systems Mn-Bi and M-type Hexaferrite

and,

Discovery and Design of Novel Permanent Magnets using Non-strategic Elements having Secure Supply Chains

…and so on. Far be it from me to claim that any of this research is useless.  It is, however, also what the people at DARPA would call “incremental,” rather than transformational.  Of course, truly transformational ideas don’t grow on trees, and DARPA also funds its share of “workmanlike” projects, but at least the source selection people there occasionally go out on a limb. In the work funded by ARPA-E, on the other hand, I can find nothing that might induce the bureaucrats on Secretary Chu’s staff to swallow their gum.

If the agency is really serious about fulfilling its mission, it might consider some of the innovative ideas out there for harnessing fusion energy.  All of them can be described as “high risk, high payoff,” but isn’t that the kind of work ARPA-E is supposed to be funding?  According to a recent article on the Science Magazine website, the White House has proposed cutting domestic fusion research by 16% to help pay for the U.S. contribution to the international fusion experiment, ITER, under construction in Cadarache, France.  As I’ve pointed out elsewhere, ITER is second only to the International Space Station as the greatest white elephant of all time, and is similarly vacuuming up funds that might otherwise have supported worthwhile research in several other countries.  All the more reason to give a leg up to fusion, a technology that has bedeviled scientists for decades, but that could potentially supply mankind’s energy needs for millennia to come.  Ideas being floated at the moment include advanced fusor concepts such as the Bussard polywell, magneto-inertial fusion, focus fusion, etc.  None of them look particularly promising to me, but if any of them pan out, the potential payoff is huge.  I’ve always been of the opinion that, if we ever do harness fusion energy, it will be by way of some such clever idea rather than by building anything like the current “conventional” inertial or magnetic fusion reactor designs.

When it comes to conventional nuclear energy, we are currently in the process of being left in the dust by countries like India and China.  Don’t expect any help from industry here.  They are in the business to make a profit.  There’s certainly nothing intrinsically wrong with that, but at the moment, profits are best maximized by building light water reactors that consume the world’s limited supply of fissile uranium 235 without breeding more fuel to replace it, spawning in the process long-lived, highly radioactive transuranic actinides that will have to be stored safely for thousands of years.  This may be good for profits, but it’s definitely bad for future generations.  Alternative designs exist that would breed as much new fuel as they consume, be intrinsically safe against meltdown, destroy the actinides along with some of the worst radioactive fission products, and leave waste that could potentially be less radioactive than the original ore within a few hundred years.  DOE’s Office of Nuclear Energy already funds some research in these areas.  Unfortunately, in keeping with the time-honored traditions of government research funding, they like to play it safe, funneling awards to “noted experts” who tend to keep plodding down well-established paths even when they are clearly leading to dead ends.  ITER and the International Space Station are costly examples of where that kind of thinking leads.  If it were really doing its job, an agency like ARPA-E might really help to shake things up a little.

Finally, we come to that scariest of boogeymen of “noted experts” the world over; cold fusion, or, as some of its advocates more reticently call it, Low Energy Nuclear Reactions (LENR).  Following the initial spate of excitement on the heels of the announcement by Pons and Fleischmann of excess heat in their experiments with palladium cells, the scientific establishment agreed that such ideas were to be denounced as heretical.  Anathemas and interdicts rained down on their remaining proponents.  Now, I must admit that I don’t have much faith in LENR myself.  I happened to attend the Cold Fusion Workshop in Santa Fe, NM which was held in 1989, not long after the Pons/Fleischmann bombshell, and saw and heard some memorably whacky posters and talks.  I’ve talked to several cold fusion advocates since then, and some appeared perfectly sober, but an unsettlingly large proportion of others seemed to be treading close to the lunatic fringe.  Just as fusion energy is always “30 years in the future,” cold fusion proponents have been claiming that their opponents will be “eating crow in six months” ever since 1989.  Some very interesting results have been reported.  Unfortunately, they haven’t been reproducible.

For all that, LENR keeps hanging around.  It continues to find advocates among those who, for one reason or another, aren’t worried about their careers, or lack respect for authority, or are just downright contrarians.  The Science of Low Energy Nuclear Reactions by Edmund Storms is a useful source for the history of and evidence for LENR.  Websites run by the cold fusion faithful may be found here and here.  Recently, stories have begun cropping up again in “respectable” mags, such as Forbes and Wired.  Limited government funding has been forthcoming from NASA Langley and, at least until recently, from the Navy at its Space and Naval Warfare Systems Command (SPAWAR).  Predictably, such funding is routinely attacked as support for scientific quackery.  The proper response to that from the source selection folks at ARPA-E should be, “So what?”  After all,

ARPA-E was created to be a catalyst for innovation. ARPA-E’s objective is to tap into the risk-taking American ethos and to identify and support the pioneers of the future. With the best research and development infrastructure in the world, a thriving innovation ecosystem in business and entrepreneurship, and a generation of youth that is willing to engage with fearless intensity, the U.S. has all the ingredients necessary for future success. The goal of ARPA-E is to harness these ingredients and make a full-court press to address the U.S.’s technological gaps and leapfrog over current energy approaches.

The best way to “harness these ingredients and make a full-court press” is not by funding the next round of incremental improvements in rare earth magnets.  Throwing a few dollars to the LENR people, on the other hand, will certainly be “high risk,” but it just might pan out.  I hope the people at ARPA-E can work up the minimal level of courage it takes to do so.  If the Paris fashions can face down ridicule, so can they.  If they lack the nerve, then DOE would probably do better to terminate its bad imitation of DARPA and feed the money back to its existing offices.  They can continue funding mediocrity just as well as ARPA-E.

Pons & Fleischmann

Higgs Boson? What’s a Boson?

It’s been over a century since Max Planck came up with the idea that electromagnetic energy could only be emitted in fixed units called quanta as a means of explaining the observed spectrum of light from incandescent light bulbs. Starting from this point, great physicists such as Bohr, de Broglie, Schrödinger, and Dirac developed the field of quantum mechanics, revolutionizing our understanding of the physical universe. By the 1930’s it was known that matter, as well as electromagnetic energy, could be described by wave equations. In other words, at the level of the atom, particles do not behave at all as if they were billiard balls on a table, or, in general, in the way that our senses portray physical objects to us at a much larger scale. For example, electrons don’t act like hard little balls flying around outside the nuclei of atoms.  Rather, it is necessary to describe where they are in terms of probability distributions, and how they act in terms of wave functions. It is impossible to tell at any moment exactly where they are, a fact formalized mathematically in Heisenberg’s famous Uncertainty Principle. All this has profound implications for the very nature of reality, most of which, even after the passage of many decades, are still unknown to the average lay person. Among other things, it follows from all this that there are two basic types of elementary particles: fermions and bosons. It turns out that they behave in profoundly different ways, and that the idiosyncrasies of neither of them can be understood in terms of classical physics.

Sometimes the correspondence between mathematics and physical reality seems almost magical.  So it is with the math that predicts the existence of fermions and bosons.  When it was discovered that particles at the atomic level actually behave as waves, a brilliant Austrian scientist named Erwin Schrödinger came up with a now-famous wave equation to describe the phenomenon.  Derived from a few elementary assumptions, based on postulates due to Einstein and others relating the wavelength and frequency of matter waves to physical quantities such as momentum and energy, as well as on the behavior of waves in general, the Schrödinger equation could be solved to find wave functions.  It was found that these wave functions were complex numbers, that is, they had a real component, and an “imaginary” component that was a multiple of i, the square root of minus one.  For example, such a number might be written down mathematically as x + iy.  Each such number has a complex conjugate, found by changing the sign of the complex term.  The complex conjugate of the above number is, therefore, x – iy.  Max Born found that the probability of finding a physical particle at any given point in space and time could be derived from the product of a solution to Schrödinger’s equation and its complex conjugate.
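
In modern notation, and using the x + iy example above, the machinery looks like this; note how multiplying by the complex conjugate turns a complex wave function into a real, non-negative probability density:

```latex
% Time-dependent Schroedinger equation (\hat{H} is the Hamiltonian operator):
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi
% Born's rule, for a wave function value \psi = x + iy:
\psi^{*}\psi = (x - iy)(x + iy) = x^{2} + y^{2} \geq 0
```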

So far, so good, but eventually it was realized that there was a problem with describing particles in this way that didn’t arise in classical physics; you couldn’t tell them apart!  Elementary particles are, after all, indistinguishable.  One electron, for example, resembles every other electron like so many peas in a pod.  Suppose you could put two electrons in a glass box, and set them in motion bouncing off the walls.  Assuming you had very good eyes, you wouldn’t have any trouble telling the two of them apart if they behaved like classical billiard balls.  You would simply have to watch their trajectories as they bounced around in the box.  However, they don’t behave like billiard balls.  Their motion must be described by wave functions, and wave functions can overlap, making it impossible to tell which wave function belongs to which electron!  Trying to measure where they are won’t help, because the wave functions are changed by the very act of measurement.

All this was problematic, because if elementary particles really were indistinguishable in that way, they also had to be indistinguishable in the mathematical equations that described their behavior.  As noted above, it had been discovered that the physical attributes of a particle could be determined in terms of the product of a solution to Schrödinger’s equation and its complex conjugate.  Assuming for the moment that the two electrons in the box didn’t collide or otherwise interact with each other, the wave function for the two-particle system would simply be the product of the wave functions for the two individual particles, and the probability density would be the product of that combined wave function and its complex conjugate.  Unfortunately, the simple product didn’t work.  If the particles were labeled and the labels switched around in the solution, the answer came out different.  The particles were distinguishable!  What to do?

Well, Schrödinger’s equation has a very useful mathematical property.  It is linear.  What that means in practical terms is that if the product of the wave functions for the two-particle system is a solution, then any linear combination of such products will also be a solution.  It was found that if the overall solution was expressed as the product of the two wave functions plus their product with the labels of the two particles interchanged, or as the product of the two wave functions minus their product with the labels interchanged, the resulting probability density function was not changed by switching the labels around.  The particles remained indistinguishable!
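
Written out, with the conventional normalization factor, and with ψa and ψb standing for the two single-particle wave functions, the two label-proof combinations are:

```latex
% Particles 1 and 2 occupying single-particle states a and b:
\Psi_{\pm}(1,2) = \tfrac{1}{\sqrt{2}}\left[\psi_{a}(1)\,\psi_{b}(2) \pm \psi_{a}(2)\,\psi_{b}(1)\right]
% Swapping the labels gives \Psi_{\pm}(2,1) = \pm\,\Psi_{\pm}(1,2), so the
% probability density \Psi^{*}\Psi is the same either way.
```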

The solution to the Schrödinger equation, referred to mathematically as an eigenfunction, is called symmetric in the plus case, and antisymmetric in the minus case.  It turns out, however, that if you do the math, particles act in very different ways depending on whether the plus sign or the minus sign is used.  And here’s where the magic comes in.  So far we’ve just been doing math, right?  We’ve just been manipulating symbols to get the math to come out right.  Well, as the great physicist Richard Feynman once put it, “To those who do not know mathematics it is difficult to get across a real feeling as to the beauty, the deepest beauty, of nature.” So it is in this case. The real particles act just as the math predicts, and in ways that are completely unexplainable in terms of classical physics!  Particles that can be described by an antisymmetric eigenfunction are called fermions, and particles that can be described by a symmetric eigenfunction are called bosons.

How do they actually differ?  Well, for reasons I won’t go into here, the so-called exclusion principle applies to fermions.  There can never be more than one of them in exactly the same quantum state.  Electrons are fermions, and that’s why they are arranged in different levels as they orbit the nucleus of an atom.  Bosons behave differently, and in ways that can be quite spectacular.  Assuming a collection of bosons can be cooled to a low enough temperature, they will all tend to condense into the same low energy quantum state.  As it happens, the helium-4 atom is a boson.  When it is cooled below a temperature of 2.17 degrees above absolute zero, it shows some very remarkable large scale quantum effects.  Perhaps the weirdest of these is superfluidity.  In this state, it behaves as if it had no viscosity at all, and can climb up the sides of a container and siphon itself out over the top!
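
That single sign difference propagates all the way up into statistical mechanics.  Here is a small sketch of the resulting occupation formulas (Fermi-Dirac for fermions, Bose-Einstein for bosons), with the chemical potential set to zero purely for illustration:

```python
import math

def fermi_dirac(e_over_kt: float) -> float:
    """Mean occupation of a state at energy E (in units of kT), mu = 0."""
    return 1.0 / (math.exp(e_over_kt) + 1.0)

def bose_einstein(e_over_kt: float) -> float:
    """Same, for bosons; diverges as E approaches the chemical potential."""
    return 1.0 / (math.exp(e_over_kt) - 1.0)

# Fermions never exceed one particle per state; bosons pile up without
# limit in low-lying states, the seed of Bose-Einstein condensation.
for e in (2.0, 1.0, 0.5, 0.1, 0.01):
    print(f"E/kT = {e:5.2f}   FD = {fermi_dirac(e):.3f}   BE = {bose_einstein(e):8.2f}")
```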

No one really knows what matter is at a fundamental level, or why it exists at all.  However, we do know enough about it to realize that our senses only tell us how it acts at the large scales that matter to most living creatures.  They don’t tell us anything about its essence.  It’s unfortunate that now, nearly a century after some of these wonderful discoveries about the quantum world were made, so few people know anything about them.  It seems to me that knowing about them and the great scientists who made them adds a certain interest and richness to life.  If nothing else, when physicists talk about the Higgs boson, it’s nice to have some clue what they’re talking about.

Superfluid liquid helium creeping over the edge of a beaker

Fusion Update: Signs of Life from the National Ignition Facility

The National Ignition Facility, or NIF, is a huge, 192-beam laser system located at Lawrence Livermore National Laboratory in California.  It was designed, as the name implies, to achieve thermonuclear ignition in the laboratory.  “Ignition” is generally accepted to mean getting a greater energy output from fusion than the laser input energy.  Unlike magnetic confinement fusion, the approach currently being pursued at the International Thermonuclear Experimental Reactor, or ITER, now under construction in France, the NIF aims to achieve ignition via inertial confinement fusion, or ICF, in which the fuel material is compressed and heated to the extreme conditions at which fusion occurs so quickly that it is held in place by its own inertia.
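Stated as a formula, the criterion is simply a target gain greater than one:

\[
G \equiv \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}} > 1.
\]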

The NIF has been operational for over a year now, and a two-year campaign is underway with the goal of achieving ignition by the end of this fiscal year.  Recently, there has been a somewhat ominous silence from the facility, manifesting itself as a lack of publications in the major journals favored by fusion scientists.  That doesn’t usually happen when there is anything interesting to report.  Finally, however, some papers have turned up in the journal Physics of Plasmas, containing reports of significant progress.

To grasp the importance of the papers, it is necessary to understand what must happen within the NIF target chamber for fusion to occur.  Of course, just as in magnetic fusion, the goal is to bring a mixture of deuterium and tritium, two heavy isotopes of hydrogen, to the extreme conditions at which fusion takes place.  In the ICF approach, this hydrogen “fuel” is contained in a tiny, BB-sized target.  However, the lasers are not aimed directly at the fuel “capsule.”  Instead, the capsule is suspended in the middle of a tiny cylinder made of a heavy metal like gold or uranium.  The lasers are fired through holes on each end of the cylinder, striking the interior walls, where their energy is converted to x-rays.  It is these x-rays that must actually bring the target to fusion conditions.

It was recognized many years ago that one couldn’t achieve fusion ignition by simply heating up the target.  That would require a laser driver orders of magnitude bigger than the NIF.  Instead, it is first necessary to compress, or implode, the fuel material to extremely high density.  Obviously, it is harder to “squeeze” hot material than cold material to the necessary high densities, so the fuel must be kept as “cold” as possible during the implosion process.  However, cold fuel won’t ignite, raising the question of how to heat it up once the necessary high densities have been achieved.
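A line of math shows why compression comes first.  The fusion reaction rate per unit volume scales with the product of the deuteron and triton number densities,

\[
R = n_D\, n_T\, \langle\sigma v\rangle,
\]

so compressing the fuel, say, a thousandfold buys roughly a millionfold increase in the burn rate, while heating only a small central fraction of the fuel to ignition temperatures keeps the energy bill manageable.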

It turns out that the answer is shocks.  When the laser-generated x-rays hit the target surface, they do so with such force that it begins to implode faster than the speed of sound.  Everyone knows that when a plane breaks the sound barrier, it, too, generates a shock, which can be heard as a sonic boom.  The same thing happens in ICF fusion targets.  When such a shock converges at the center of the target, the result is a small “hot spot” in the center of the fuel.  If the temperature in the hot spot were high enough, fusion would occur.  Each fusion reaction would release a high energy helium nucleus, or alpha particle, and a neutron.  The alpha particles would be slammed to a stop in the surrounding cold fuel material, heating it, in turn, to fusion conditions.  This would result in a fusion “burn wave” that would propagate out through the rest of the fuel, completing the fusion process.
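For reference, the reaction in question, with the standard split of the released energy between the two products, is

\[
\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}).
\]

The alpha particle is charged, so it stops in the dense fuel and drives the burn wave; the far more energetic neutron mostly escapes the target.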

The problem is that one shock isn’t enough to create such a “hot spot.”  Four of them are required, launched by the carefully tailored NIF laser pulse and precisely timed to converge at the center of the target at the same instant.  This is where real finesse is needed in laser fusion.  The implosion must be extremely symmetric, or the shocks will not converge properly.  The timing must be exact, and the laser pulse must deliver just the right amount of energy.

One problem in the work to date has been an inability to achieve high enough implosion velocities for the above scenario to work as planned.  One of the Physics of Plasmas papers reports that, by increasing the laser energy and replacing with depleted uranium some of the gold originally used in the wall of the cylinder, or “hohlraum,” in which the fuel capsule is mounted, velocities of 99% of those required for ignition have been achieved.  In view of the recent announcement that a shot on the NIF had exceeded its design energy of 1.8 megajoules, it appears the required velocity is within reach.  Another of the Physics of Plasmas papers dealt with the degree to which implosion asymmetries were causing harmful mixing of the surrounding cold fuel material into the imploded core of the target.  It, too, provided grounds for optimism.

In the end, I suspect the success or failure of the NIF will depend on whether the complex sequence of four shocks can really be made to work as advertised.  That will depend on the accuracy of the physics algorithms in the computer codes that have been used to model the experiments.  Time and again, earlier and less sophisticated codes have been wrong because they didn’t accurately account for all the relevant physics.  There is no guarantee that critical phenomena have not been left out of the current versions as well.  We may soon find out, if the critical series of experiments intended to achieve ignition before the end of the fiscal year is carried out as planned.

One can but hope they will succeed, if only because some of our finest scientists have dedicated their careers to the quest to achieve the elusive goal of controlled fusion.  Even if they do, fusion based on the NIF approach is unlikely to become a viable source of energy, at least in the foreseeable future.  Laser fusion may prove scientifically feasible, but getting useful energy out of it will be an engineering nightmare, dangerous because of the need to rely on highly volatile and radioactive tritium, and much too expensive to compete with potential alternatives.  I know many of the faithful in the scientific community will beg to differ with me, but, trust me, laser fusion energy ain’t gonna happen.

On the other hand, if ignition is achieved, the NIF will be invaluable to the country, not as a source of energy, but for the reason it was funded in the first place – to ensure that our nation has an unmatched suite of experimental facilities to study the physics of nuclear weapons in an era free of nuclear testing.  As long as we have unique access to facilities like the NIF, which can approach the extreme physical conditions within exploding nukes, we will have a significant leg up on the competition while the test ban remains in place.  For that, if for no other reason, we should keep our fingers crossed that the NIF team can finally clear the last technical hurdles and reach the goal they have been working towards for so long.

Fusion ignition process, courtesy of Lawrence Livermore National Laboratory

Space Colonization and Stephen Hawking

Stephen Hawking is in the news again as an advocate for space colonization.  He raised the issue in a recent interview with the Canadian Press, and will apparently include it as a theme of his new TV series, Brave New World with Stephen Hawking, which debuts on Discovery World HD on Saturday.  There are a number of interesting aspects to the story this time around.  One that most people won’t even notice is Hawking’s reference to human nature.  Here’s what he had to say.

Our population and our use of the finite resources of planet Earth are growing exponentially, along with our technical ability to change the environment for good or ill. But our genetic code still carries the selfish and aggressive instincts that were of survival advantage in the past. It will be difficult enough to avoid disaster in the next hundred years, let alone the next thousand or million.

The fact that Hawking can matter-of-factly assert something like that about innate behavior in humans as if it were a matter of common knowledge speaks volumes about the amazing transformation in public consciousness that’s taken place in just the last 10 or 15 years.  If he’d said something like that about “selfish and aggressive instincts” 50 years ago, the entire community of experts in the behavioral sciences would have dismissed him as an ignoramus at best, and a fascist and right wing nut case at worst.  It’s astounding, really.  I’ve watched this whole story unfold in my lifetime.  It’s just as stunning as the paradigm shift from an earth-centric to a heliocentric solar system, only this time around, Copernicus and Galileo are unpersons, swept under the rug by an academic and professional community too ashamed of their own past collective imbecility to mention their names.  Look in any textbook on Sociology, Anthropology, or Evolutionary Psychology, and you’ll see what the sounds of silence look like in black and white.  Aside from a few obscure references, the whole thing is treated as if it never happened.  Be grateful, dear reader.  At last we can say the obvious without being shouted down by the “experts.”  There is such a thing as human nature.

Now look at the comments after the story in the Winnipeg Free Press I linked above.  Here are some of them.

“Our only chance of long-term survival is not to remain lurking on planet Earth, but to spread out into space.”  If that is the case, perhaps we don’t deserve to survive. If we bring destruction to our planet, would it not be in the greater interest to destroy the virus, or simply let it expire, instead of spreading its virulence throughout the galaxy?

And who would decide who gets to go? Also, “Our only chance of long-term survival is not to remain lurking on planet Earth, but to spread out into space.” What a stupid thing to say: if we can’t survive ‘lurking’ on planet Earth then who’s to say humans wouldn’t ruin things off of planet Earth?

I will not go through any of this as I will be dead by then and gone to a better place as all those who remain and go through whatever happenings in the Future,will also do!

I’ve written a lot about morality on this blog.  These comments speak to why it’s important to get morality right, to understand its real nature and why it exists.  All of them are morally loaded.  As is the case with virtually all morally loaded comments, their authors couldn’t give you a coherent explanation of why they have those opinions.  They just feel that way.  I don’t doubt that they’re entirely sincere about what they say.  The genetic programming that manifests itself as human moral behavior evolved many millennia ago in creatures who couldn’t conceive of themselves as members of a worldwide species, or imagine travel into space.  What these comments demonstrate is something that’s really been obvious for a long time.  In the environment that now exists, vastly different as it is from the one in which our moral predispositions evolved, those predispositions can manifest themselves in ways that are, by any reasonable definition of the word, pathological.  In other words, they can manifest themselves in ways that no longer promote our survival, but rather the opposite.

As can be seen from the first comment, for example, thanks to our expanded consciousness of the world we live in, we can conceive of such an entity as “all mankind.”  Our moral programming predisposes us to categorize our fellow creatures into ingroups and outgroups.  In this case, “all mankind” has become an outgroup or, as the commenter puts it, a “virus.”  The demise, not only of the individual commenter, but of all mankind, has become a positive Good.  More or less the same thing can be said about the second comment.  This commenter apparently believes that it would be better for humans to become extinct than to “mess things up.”  For whom?

As for the third commenter, survival in this world is unimportant to him because he believes in eternal survival in a future imaginary world under the proprietorship of an imaginary supernatural being.  It is unlikely that this attitude is more conducive to our real genetic survival than those of the first two commenters.  I submit that if these commenters had an accurate knowledge of the real nature of human morality in the first place, and were free of delusions about supernatural beings in the second, the tone of their comments would be rather different.

And what of my opinion on the matter?  In my opinion, morality is the manifestation of genetically programmed traits that evolved because they happened to promote our survival.  No doubt because I understand morality in this way, I have a subjective emotional tendency to perceive the Good as my own genetic survival, the survival of my species, and the survival of life as it has evolved on earth, not necessarily in that order.  Objectively, my version of the Good is no more legitimate or objectively valid than those of the three commenters.  In some sense, you might say it’s just a whim.  I do, however, think that my subjective feelings on the matter are reasonable.  I want to pursue as a “purpose” that which the evolution of morality happened to promote: survival.  It seems to me that an evolved, conscious biological entity that doesn’t want to survive is dysfunctional – it is sick.  I would find the realization that I am sick and dysfunctional distasteful.  Therefore, I choose to survive.  In fact, I am quite passionate about it.  I believe that, if others finally grasp the truth about what morality really is, they are likely to share my point of view.  If we agree, then we can help each other.  That is why I write about it.

By all means, then, let us colonize space, and not just our solar system, but the stars.  We can start now.  We lack sources of energy capable of carrying humans to even the nearest stars, but we can send life, even if only single-celled life.  Let us begin.

Belgium Joins the Nuclear de-Renaissance

The move away from nuclear power in Europe is becoming a stampede.  According to Reuters, the Belgians are now on the bandwagon, with plans for shutting down the country’s last reactors in 2025.  The news comes as no surprise, as the anti-nukers in Belgium have had the upper hand for some time.  However, the agreement reached by the country’s political parties has been made “conditional” on whether the energy deficit can be made up by renewable sources.  Since Belgium currently gets about 55 percent of its power from nuclear, the chances of that appear slim.  It’s more likely that baseload power deficits will be made up with coal and gas plants that emit tons of carbon and, in the case of coal, represent a greater radioactive hazard than nuclear because of the uranium and thorium they spew into the atmosphere.  No matter.  Since Fukushima, global warming hysteria is passé and anti-nuclear hysteria is back in fashion again for the professional saviors of the world.

It will be interesting to see how all this turns out in the long run.  In the short term it will certainly be a boon to China and India.  They will continue to expand their nuclear capacity and their lead in advanced nuclear technology, with a windfall of cheaper fuel thanks to Western anti-nuclear activism.  By the time the Europeans come back to the real world and finally realize that renewables aren’t going to cover all their energy needs, they will likely be forced to fall back on increasingly expensive and heavily polluting fossil fuels.  Germany is already building significant new coal-fired capacity.

Of course, we may be dealt a wild card if one of the longshot schemes for taming fusion on the cheap actually works.  The odds look long at the moment, though.  We’re hearing nothing but a stony silence from the National Ignition Facility, which bodes ill for what seems to be the world’s last best hope to perfect inertial confinement fusion.  Things don’t look much better at ITER, the flagship facility for magnetic fusion, the other mainstream approach.  There are no plans to even fuel the facility before 2028.

DARPA’s “100 Year Starship” and Planetary Colonization

DARPA seems to have its priorities straight when it comes to space exploration.  The agency is funding what it calls the “100 Year Starship” program to study novel propulsion systems with the eventual goal of colonizing space.  Pete Worden, Director of NASA’s Ames Research Center, suggests that Mars might be colonized by 2030 via one-way missions.  It’s an obvious choice, really.  There’s little point in sending humans to Mars unless they’re going to stay there, and, at least from my point of view, establishing a permanent presence on the red planet is a good idea.  My point of view is based on the conclusion that, if there’s really anything that we “ought” to do, it’s survive.  Everything about us that makes us what we are evolved because it promoted our survival, so it seems that survival is a reasonable goal.  There’s no absolutely legitimate reason why we should survive, but, if we don’t, it would seem to indicate that we are a dysfunctional species, and I find that thought unpleasant.  There, in a nutshell, is my rationale for making human survival my number one priority.

If we seek to survive, then, when it comes to planets, it would be unwise to put all of our eggs in one basket.  Stephen Hawking apparently agrees with me on this, as can be seen here and here.  In his words,

It will be difficult enough to avoid disaster on planet Earth in the next hundred years, let alone the next thousand, or million. The human race shouldn’t have all its eggs in one basket, or on one planet. Let’s hope we can avoid dropping the basket until we have spread the load.

Not unexpectedly in this hypermoralistic age, morality is being dragged into the debate.  The usual “ethics experts” are wringing their hands about how and under what circumstances we have a “right” to colonize space, and what we must do to avoid being “immoral” in the process.  Related discussions can be found here and here.  Apparently it never occurs to people who raise such issues that human beings make moral judgments and are able to conceive of such things as “rights” only because of the existence of emotional wiring in our brains that evolved because it promoted our survival and that of our prehuman ancestors.  Since it evolved at times and under circumstances that were apparently uninfluenced by what was happening on other planets, morality and “rights” are relevant to the issue only to the extent that they muddy the waters.

Assuming that others agree with me and Dr. Hawking that survival is a desirable goal, then ultimately we must seek to move beyond our own solar system.  Unfortunately, there are severe constraints on our ability to send human beings on such long voyages, owing to the vast amounts of energy that would be necessary to make interstellar journeys within human lifetimes (a rough estimate follows below).  For the time being, at least, we must rely on very small vessels that may take a very long time to reach their goals.  Nanotechnology is certainly part of the answer.  Tiny probes might survey the earth-like planets we discover to determine their capacity to support life.  Those found suitable should be seeded with life as soon as possible.  Again, because of energy constraints, it may only be possible to send one-celled or very simple life forms at first.  They can survive indefinitely long voyages in space, and would be the logical choice to begin seeding other planets.  Self-replicating nano-robots capable of building a suitable environment for more complex life forms, including incubators and surrogate parents, might then be sent.  At that point, it would become possible to send more complex life forms, including human beings, in the form of frozen fertilized eggs.  These are some of the things we might undertake if we consider our survival important.
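Here is that rough, back-of-the-envelope figure, with a purely hypothetical one-ton payload, to show the scale of the problem.  The kinetic energy of a relativistic payload is

\[
E = (\gamma - 1)\,m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
\]

and accelerating just 1,000 kilograms to a tenth of the speed of light (γ ≈ 1.005) requires about 4.5 × 10^17 joules, on the order of a hundred megatons of TNT.  That is before allowing for propellant, inefficiencies, or slowing down at the destination, which is precisely why very small payloads are so attractive.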

Of course, any number of the pathologically pious among us might find what I’ve written above grossly immoral.  The fact remains that there is no legitimate basis for such a judgment.  Morality exists because it promoted our survival.  There can be nothing more immoral than failing to survive.

The Daedalus Starship