The NIF: Lots of Power and Energy, but No Ignition

According to a recent press release from Lawrence Livermore National Laboratory (LLNL) in California, the 192-beam National Ignition Facility (NIF) fired a 500 terawatt shot on July 5.  The world record power followed a world record energy shot of 1.89 megajoules on July 3.  As news, this doesn’t rise above the “meh” category.  A shot at the NIF’s design energy of 1.8 megajoules was already recorded back in March.  It’s quite true that, as NIF Director Ed Moses puts it, “NIF is becoming everything scientists planned when it was conceived over two decades ago.”  The NIF is a remarkable achievement in its own right, capable of achieving energies 50 times greater than any other laser facility, with pulses shaped and timed to pinpoint precision.  The NIF team in general and Ed Moses in particular deserve great credit, and the nation’s gratitude, for turning things around after a very shaky start.
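A bit of my own arithmetic, not from the press release, shows how the two records fit together.  Energy is just power integrated over time, and NIF pulses last a few billionths of a second:

```python
# Back-of-envelope check: a flat-top pulse at the record power would
# deliver the record energy in a few nanoseconds.
power_w = 500e12    # 500 terawatts
energy_j = 1.89e6   # 1.89 megajoules

duration_s = energy_j / power_w
print(f"{duration_s * 1e9:.1f} ns")  # ~3.8 ns
```

In reality the NIF drive pulse is shaped over roughly 20 nanoseconds, with the peak power reached only near the end, which is why the power and energy records could be set on different shots.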

The problem is that, while the facility works as well as, and even better than, planned, the goal it was built to achieve continues to elude us.  As its name implies, the news everyone is actually waiting for is the announcement that ignition (defined as fusion energy out greater than laser energy in) has been achieved.  As noted in the article, Moses said back in March that “We have all the capability to make it happen in fiscal year 2012.”  At this point, he probably wishes his tone had been a mite less optimistic.  To reach their goal in the two months remaining, the NIF team will need to pull a rabbit out of their collective hat.  A slim chance remains.  Apparently the NIF’s 192 laser beams were aimed at a real ignition target on July 5, with a depleted uranium hohlraum and deuterium-tritium fuel, and not at a surrogate.  The data from that shot may prove to be a great deal more interesting than the 500 terawatt power announcement.

Meanwhile, the Russians are apparently forging ahead with plans for their own superlaser, to be capable of a whopping 2.8 megajoules, and the Chinese are planning another about half that size, to be operational at about the same time (around 2020).  That, in itself, speaks volumes about the real significance of ignition.  It may be huge for the fusion energy community, but not that great as far as the weaponeers who actually fund these projects are concerned.  Many weapons designers at LLNL and Los Alamos were notably unenthusiastic about ignition when NIF was still in the planning stages.  What attracted them more were the extreme conditions, approaching those in an exploding nuke, that could be achieved by the lasers without ignition.  They thought, not without reason, that it would be much easier to collect useful information from such experiments than from chaotic ignition plasmas.  Apparently the Russian bomb designers agree.  They announced their laser project back in February even though LLNL’s difficulties in achieving ignition were well known at the time.

The same can be said of some of the academic types in the NIF “user community.”  It’s noteworthy that two of them, Rick Petrasso of MIT and Ray Jeanloz of UC Berkeley, whose enthusiastic comments about the 500 terawatt shot were quoted in the latest press release, are both key players in the field of high energy density physics.  Ignition isn’t a sine qua non for them either.  They will be able to harvest scores of papers from the NIF whether it achieves ignition or not.

The greatest liability of not achieving early ignition may be the evaporation of political support for the NIF.  The natives are already becoming restless.  As noted in the Livermore Independent,

In early May, sounding as if it were discussing an engineering project rather than advanced research, the House Appropriations Committee worried that NIF’s “considerable costs will not have been warranted” if it does not achieve ignition by September 30, the end of the federal fiscal year.

and,

Later that month, in a tone that seemed to demand that research breakthroughs take place according to schedule, the House Armed Services Committee recommended that NIF’s ignition research budget for next year be cut by $30 million from the requested $84 million budget unless NIF achieves ignition by September 30.

Funding cuts at this point, after we have come so far, and are so close to the goal, would be short-sighted indeed.  One must hope that a Congress capable of squandering billions on white elephants like the International Space Station will not become penny-wise and pound-foolish about funding a project that really matters.

Group Selection Plays the “Virtue” Card

I know, I’ve been a mite heavy on the group selection stuff lately, but I can’t help it. Recent developments touched off by the publication of E. O. Wilson’s The Social Conquest of Earth are, to coin a term, “fascinating,” if you know the history of the theory and the controversy surrounding it. The latest plot twist is the appearance of an article by group selection proponent Martin Nowak entitled “Why We Help,” as the cover story in the latest edition of Scientific American. Nowak was co-author with Wilson and Corina Tarnita of a hard-core group selection paper entitled The Evolution of Eusociality that appeared in Nature in August 2010. I say “hard-core” because the paper included a section announcing the “fall of inclusive fitness theory,” a claim alluded to by Wilson in his book as if it were an accomplished fact. This drew immediate counter-blasts from inclusive fitness theorists such as Richard Dawkins, Steven Pinker and Jerry Coyne. Now, perhaps all unbeknownst to themselves, the group selectionists have played the “virtue” card.

Scientific American, as it happens, should have been renamed Politically Correct American long ago. Its editors are relentless promoters of the “progressive” version of the Good. Enter Martin Nowak, with an article about the evolution of cooperation, a progressive Good if ever there was one. To make sure its readers get the point, SA added the following blurb on the cover: “The Evolution of Cooperation; Competition is not the only force that shaped life on earth.” Competition is, of course, anathema to all right thinkers on the left. The Dawkins/Pinker faction, on the other hand, has stressed the notion of the “selfish” gene, which they associate with innate “selfish” human behaviors. If history is any guide, they are treading on thin ice. In the past, Scientific American has responded to such deviations from the “correct” line with thinly veiled hints that their authors are “conservative,” or even, heaven forfend, fascist!

Group selectionists have long had the virtue card up their sleeves. For example, Mark Borrello cites saintly anarchist godfather Peter Kropotkin (fondly referred to by Lenin as “that old fool Kropotkin”) as an early advocate of the idea in his book, Evolutionary Restraints:

Kropotkin argued (in a series of articles published between 1905 and 1919, ed.) that in the course of the struggle against the environment, species were more apt to practice mutual aid, and that cooperative species would increase in numbers and outlast their individualistic rivals. In this scenario, natural selection ceases to be “a selection of haphazard variations, but becomes a physiological selection of those individuals, societies and groups which are best capable of meeting the new requirements by new adaptations of their tissues, organs and habits. It operates largely as a selection of groups of individuals, modified all at once, more or less, in a given direction.”

Of course, Kropotkin was a political ideologue, and political ideologues have a habit of construing “reality” to favor whatever flavor of utopia they happen to prefer. I’m not aware of the political proclivities of Nowak, and have no evidence that his theories are tainted by ideology. However, there are some hints in the article, perhaps reflecting the context (Scientific American) in which he is writing. For example,

As the human population expands and the climate changes, we will need to harness that adaptability and figure out ways to work together to save the planet and its inhabitants.

and,

Policy makers should take note of indirect reciprocity and the importance of information and reputation in keeping defectors in check. And they should exploit the capacity of these factors to make better cooperators of us all in the mother of all public goods games: the seven-billion-person mission to conserve the rapidly dwindling resources of planet Earth.

It is interesting that Nowak is very reserved about his advocacy of group selection in the article. Instead, he cites his background in the mathematics of game theory. Group selection is only mentioned in passing as the last of five mechanisms that may have contributed to the evolution of cooperation. As Nowak puts it,

Last, individuals may perform selfless acts for the greater good, as opposed to abetting a single peer. This fifth means by which cooperation may take root is known as group selection.

No matter; at this point, “Nowak” and “group selection” are virtually synonymous among evolutionary biologists, so they’ll get the drift, although most of them would probably dispute that the acts involved are really “selfless.” Still, “selfless acts for the greater good” hits the right tone for an article in Scientific American.

And so continues the melodramatic career of the theory of group selection. After Steven Pinker used it in his comical “history” of the Blank Slate as a pretext to dismiss, in a single paragraph, the life work of Robert Ardrey, the most effective and influential debunker of the Blank Slate, as “totally and utterly wrong,” it would seem the theory has now risen from the grave. Pinker had better step lively, or he may soon find himself on the wrong side of the “virtue” line.  There may be poetic justice in science after all.

Higgs Boson? What’s a Boson?

It’s been over a century since Max Planck came up with the idea that electromagnetic energy could only be emitted in fixed units called quanta as a means of explaining the observed spectrum of light from incandescent sources. Starting from this point, great physicists such as Bohr, de Broglie, Schrödinger, and Dirac developed the field of quantum mechanics, revolutionizing our understanding of the physical universe. By the 1930’s it was known that matter, as well as electromagnetic energy, could be described by wave equations. In other words, at the level of the atom, particles do not behave at all as if they were billiard balls on a table, or, in general, in the way that our senses portray physical objects to us at a much larger scale. For example, electrons don’t act like hard little balls flying around outside the nuclei of atoms.  Rather, it is necessary to describe where they are in terms of probability distributions, and how they act in terms of wave functions. It is impossible to tell at any moment exactly where they are, a fact formalized mathematically in Heisenberg’s famous Uncertainty Principle. All this has profound implications for the very nature of reality, most of which, even after the passage of many decades, are still unknown to the average lay person. Among other things, it follows from all this that there are two basic types of elementary particles: fermions and bosons. It turns out that they behave in profoundly different ways, and that the idiosyncrasies of neither can be understood in terms of classical physics.

Sometimes the correspondence between mathematics and physical reality seems almost magical.  So it is with the math that predicts the existence of fermions and bosons.  When it was discovered that particles at the atomic level actually behave as waves, a brilliant Austrian scientist named Erwin Schrödinger came up with a now-famous wave equation to describe the phenomenon.  Derived from a few elementary assumptions about the behavior of waves in general, together with postulates of Einstein and others relating the wavelength and frequency of matter waves to physical quantities such as momentum and energy, the Schrödinger equation could be solved to find wave functions.  It was found that these wave functions were complex-valued, that is, they had a real component, and an “imaginary” component that was a multiple of i, the square root of minus one.  For example, the value of such a function at a given point might be written down mathematically as x + iy.  Each such number has a complex conjugate, found by changing the sign of the imaginary term.  The complex conjugate of the above number is, therefore, x – iy.  Max Born found that the probability of finding a physical particle at any given point in space and time could be derived from the product of a solution to Schrödinger’s equation and its complex conjugate.
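In symbols (my restatement of the paragraph above, using its own example): if the wave function takes the value $\psi = x + iy$ at some point, then

$$\psi^{*}\psi = (x - iy)(x + iy) = x^{2} + y^{2},$$

which is always real and non-negative – just what a probability density has to be.  That is Born’s rule: the probability density is $P = \psi^{*}\psi = |\psi|^{2}$.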

So far, so good, but eventually it was realized that there was a problem with describing particles in this way that didn’t arise in classical physics: you couldn’t tell them apart!  Elementary particles are, after all, indistinguishable.  One electron, for example, resembles every other electron like so many peas in a pod.  Suppose you could put two electrons in a glass box, and set them in motion bouncing off the walls.  Assuming you had very good eyes, you wouldn’t have any trouble telling the two of them apart if they behaved like classical billiard balls.  You would simply have to watch their trajectories as they bounced around in the box.  However, they don’t behave like billiard balls.  Their motion must be described by wave functions, and wave functions can overlap, making it impossible to tell which wave function belongs to which electron!  Trying to measure where they are won’t help, because the wave functions are changed by the very act of measurement.

All this was problematic, because if elementary particles really were indistinguishable in that way, they also had to be indistinguishable in the mathematical equations that described their behavior.  As noted above, it had been discovered that the physical attributes of a particle could be determined in terms of the product of a solution to Schrödinger’s equation and its complex conjugate.  Assuming for the moment that the two electrons in the box didn’t collide or otherwise interact with each other, that implies that the probability density for the two-particle system would depend on the product of the two single-particle wave functions and its complex conjugate.  Unfortunately, the simple product didn’t work.  If the particles were labeled and the labels switched around in the solution, the answer came out different.  The particles were distinguishable!  What to do?

Well, Schrödinger’s equation has a very useful mathematical property.  It is linear.  What that means in practical terms is that if the product of the wave functions for the two-particle system is a solution, then any linear combination of such products will also be a solution.  It was found that if the overall solution was expressed as the product of the two wave functions plus the same product with the labels of the two particles interchanged, or as the product of the two wave functions minus that product with the labels interchanged, the resulting probability density function was not changed by switching the labels around.  The particles remained indistinguishable!
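Written out in symbols (my notation), with $\psi_{a}$ and $\psi_{b}$ the two single-particle wave functions and the arguments 1 and 2 labeling the particles, the two workable combinations are

$$\Psi_{\pm}(1,2) = \psi_{a}(1)\,\psi_{b}(2) \pm \psi_{a}(2)\,\psi_{b}(1).$$

Swapping the labels gives $\Psi_{\pm}(2,1) = \pm\,\Psi_{\pm}(1,2)$, so the probability density $\Psi_{\pm}^{*}\Psi_{\pm}$ comes out exactly the same either way.  The particles stay indistinguishable.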

The solution to the Schrödinger equation, referred to mathematically as an eigenfunction, is called symmetric in the plus case, and antisymmetric in the minus case.  It turns out, however, that if you do the math, particles act in very different ways depending on whether the plus sign or the minus sign is used.  And here’s where the magic comes in.  So far we’ve just been doing math, right?  We’ve just been manipulating symbols to get the math to come out right.  Well, as the great physicist Richard Feynman once put it, “To those who do not know mathematics it is difficult to get across a real feeling as to the beauty, the deepest beauty, of nature.” So it is in this case. The real particles act just as the math predicts, and in ways that are completely unexplainable in terms of classical physics!  Particles that can be described by an antisymmetric eigenfunction are called fermions, and particles that can be described by a symmetric eigenfunction are called bosons.

How do they actually differ?  Well, for reasons I won’t go into here, the so-called exclusion principle applies to fermions.  There can never be more than one of them in exactly the same quantum state.  Electrons are fermions, and that’s why they are arranged in different levels as they orbit the nucleus of an atom.  Bosons behave differently, and in ways that can be quite spectacular.  If a collection of bosons can be cooled to a low enough temperature, they will all tend to condense into the same low energy quantum state.  As it happens, the common helium-4 atom is a boson.  When it is cooled below a temperature of 2.18 degrees above absolute zero, it shows some very remarkable large scale quantum effects.  Perhaps the weirdest of these is superfluidity.  In this state, it behaves as if it had no viscosity at all, and can climb up the sides of a container and siphon itself out over the top!
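For the curious, the reason is actually a one-liner in the notation above.  Try to put two fermions in the same state by setting $b = a$ in the antisymmetric combination:

$$\Psi_{-}(1,2) = \psi_{a}(1)\,\psi_{a}(2) - \psi_{a}(2)\,\psi_{a}(1) = 0.$$

The wave function vanishes identically, so the probability of ever finding two fermions in the same quantum state is exactly zero – that is the exclusion principle.  The symmetric combination has no such zero, which is why nothing stops bosons from piling into a single state.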

No one really knows what matter is at a fundamental level, or why it exists at all.  However, we do know enough about it to realize that our senses only tell us how it acts at the large scales that matter to most living creatures.  They don’t tell us anything about its essence.  It’s unfortunate that now, nearly a century after some of these wonderful discoveries about the quantum world were made, so few people know anything about them.  It seems to me that knowing about them and the great scientists who made them adds a certain interest and richness to life.  If nothing else, when physicists talk about the Higgs boson, it’s nice to have some clue what they’re talking about.

Superfluid liquid helium creeping over the edge of a beaker

Fusion Update: Signs of Life from the National Ignition Facility

The National Ignition Facility, or NIF, is a huge, 192 beam laser system, located at Lawrence Livermore National Laboratory in California.  It was designed, as the name implies, to achieve thermonuclear ignition in the laboratory.  “Ignition” is generally accepted to mean getting a greater energy output from fusion than the laser input energy.  Unlike magnetic confinement fusion, the approach currently being pursued at the International Thermonuclear Experimental Reactor, or ITER, now under construction in France, the goal of the NIF is to achieve ignition via inertial confinement fusion, or ICF, in which the fuel material is compressed and heated so quickly to the extreme conditions at which fusion occurs that it is held in place by its own inertia.

The NIF has been operational for over a year now, and a two-year campaign is underway with the goal of achieving ignition by the end of this fiscal year.  Recently, there has been a somewhat ominous silence from the facility, manifesting itself as a lack of publications in the major journals favored by fusion scientists.  That doesn’t usually happen when there is anything interesting to report.  Finally, however, some papers have turned up in the journal Physics of Plasmas, containing reports of significant progress.

To grasp the importance of the papers, it is necessary to understand what is supposed to happen within the NIF target chamber for fusion to occur.  Of course, just as in magnetic fusion, the goal is to bring a mixture of deuterium and tritium, two heavy isotopes of hydrogen, to the extreme conditions at which fusion takes place.  In the ICF approach, this hydrogen “fuel” is contained in a tiny, BB-sized target.  However, the lasers are not aimed directly at the fuel “capsule.”  Instead, the capsule is suspended in the middle of a tiny cylinder made of a heavy metal like gold or uranium.  The lasers are fired through holes on each end of the cylinder, striking the interior walls, where their energy is converted to x-rays.  It is these x-rays that must actually bring the target to fusion conditions.

It was recognized many years ago that one couldn’t achieve fusion ignition by simply heating up the target.  That would require a laser driver orders of magnitude bigger than the NIF.  Instead, it is first necessary to compress, or implode, the fuel material to extremely high density.  Obviously, it is harder to “squeeze” hot material than cold material to the necessary high densities, so the fuel must be kept as “cold” as possible during the implosion process.  However, cold fuel won’t ignite, raising the question of how to heat it up once the necessary high densities have been achieved.

It turns out that the answer is shocks.  When the laser-generated x-rays hit the target surface, they do so with such force that it begins to implode faster than the speed of sound.  Everyone knows that when a plane breaks the sound barrier, it, too, generates a shock, which can be heard as a sonic boom.  The same thing happens in ICF fusion targets.  When such a shock converges at the center of the target, the result is a small “hot spot” in the center of the fuel.  If the temperature in the hot spot were high enough, fusion would occur.  Each fusion reaction would release a high energy helium nucleus, or alpha particle, and a neutron.  The alpha particles would be slammed to a stop in the surrounding cold fuel material, heating it, in turn, to fusion conditions.  This would result in a fusion “burn wave” that would propagate out through the rest of the fuel, completing the fusion process.
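The energy split between the alpha particle and the neutron follows from simple momentum conservation – a back-of-envelope sketch of my own, using the standard 17.6 MeV yield of the deuterium-tritium reaction:

```python
# D + T -> He-4 (alpha) + n releases about 17.6 MeV.  For a two-body
# reaction from (nearly) rest, momentum conservation gives each product
# an energy share inversely proportional to its mass.
E_TOTAL_MEV = 17.6
M_ALPHA = 4.0    # approximate mass, atomic mass units
M_NEUTRON = 1.0

e_alpha = E_TOTAL_MEV * M_NEUTRON / (M_ALPHA + M_NEUTRON)
e_neutron = E_TOTAL_MEV * M_ALPHA / (M_ALPHA + M_NEUTRON)

print(f"alpha:   ~{e_alpha:.1f} MeV")    # ~3.5 MeV, deposited in the cold fuel
print(f"neutron: ~{e_neutron:.1f} MeV")  # ~14.1 MeV, mostly escapes the target
```

The alpha’s ~3.5 MeV is the share available to drive the burn wave; the neutron’s ~14.1 MeV largely leaves the fuel.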

The problem is that one shock isn’t enough to create such a “hot spot.”  Four of them are required, all precisely timed by the carefully tailored NIF laser pulse to converge at the center of the target at exactly the same time.  This is where real finesse is needed in laser fusion.  The implosion must be extremely symmetric, or the shocks will not converge properly.  The timing must be exact, and the laser pulse must deliver just the right amount of energy.

One problem in the work to date has been an inability to achieve high enough implosion velocities for the above scenario to work as planned.  One of the Physics of Plasmas papers reports that, by increasing the laser energy and replacing with depleted uranium some of the gold originally used in the wall of the cylinder, or “hohlraum,” in which the fuel capsule is mounted, velocities of 99% of those required for ignition have been achieved.  In view of the recent announcement that a shot on the NIF had exceeded its design energy of 1.8 megajoules, it appears the required velocity is within reach.  Another of the Physics of Plasmas papers dealt with the degree to which implosion asymmetries were causing harmful mixing of the surrounding cold fuel material into the imploded core of the target.  It, too, provided grounds for optimism.

In the end, I suspect the success or failure of the NIF will depend on whether the complex sequence of four shocks can really be made to work as advertised.  That will depend on the accuracy of the physics algorithms in the computer codes that have been used to model the experiments.  Time and again, earlier and less sophisticated codes have been wrong because they didn’t accurately account for all the relevant physics.  There is no guarantee that critical phenomena have not been left out of the current versions as well.  We may soon find out, if the critical series of experiments aimed at achieving ignition before the end of the fiscal year is carried out as planned.

One can but hope they will succeed, if only because some of our finest scientists have dedicated their careers to the quest to achieve the elusive goal of controlled fusion.  Even if they do, fusion based on the NIF approach is unlikely to become a viable source of energy, at least in the foreseeable future.  Laser fusion may prove scientifically feasible, but getting useful energy out of it will be an engineering nightmare, dangerous because of the need to rely on highly volatile and radioactive tritium, and much too expensive to compete with potential alternatives.  I know many of the faithful in the scientific community will beg to differ with me, but, trust me, laser fusion energy ain’t gonna happen.

On the other hand, if ignition is achieved, the NIF will be invaluable to the country, not as a source of energy, but for the reason it was funded in the first place – to ensure that our nation has an unmatched suite of experimental facilities to study the physics of nuclear weapons in an era free of nuclear testing.  As long as the test ban remains in place, unique access to facilities like the NIF, which can approach the extreme physical conditions within exploding nukes, will give us a significant leg up on the competition.  For that, if for no other reason, we should keep our fingers crossed that the NIF team can finally clear the last technical hurdles and reach the goal they have been working towards for so long.

Fusion ignition process, courtesy of Lawrence Livermore National Laboratory

Space Colonization and Stephen Hawking

Stephen Hawking is in the news again as an advocate for space colonization.  He raised the issue in a recent interview with the Canadian Press, and will apparently include it as a theme of his new TV series, Brave New World with Stephen Hawking, which debuts on Discovery World HD on Saturday.  There are a number of interesting aspects to the story this time around.  One that most people won’t even notice is Hawking’s reference to human nature.  Here’s what he had to say.

Our population and our use of the finite resources of planet Earth are growing exponentially, along with our technical ability to change the environment for good or ill. But our genetic code still carries the selfish and aggressive instincts that were of survival advantage in the past. It will be difficult enough to avoid disaster in the next hundred years, let alone the next thousand or million.

The fact that Hawking can matter-of-factly assert something like that about innate behavior in humans as if it were a matter of common knowledge speaks volumes about the amazing transformation in public consciousness that’s taken place in just the last 10 or 15 years.  If he’d said something like that about “selfish and aggressive instincts” 50 years ago, the entire community of experts in the behavioral sciences would have dismissed him as an ignoramus at best, and a fascist and right wing nut case at worst.  It’s astounding, really.  I’ve watched this whole story unfold in my lifetime.  It’s just as stunning as the paradigm shift from an earth-centric to a heliocentric solar system, only this time around, Copernicus and Galileo are unpersons, swept under the rug by an academic and professional community too ashamed of their own past collective imbecility to mention their names.  Look in any textbook on Sociology, Anthropology, or Evolutionary Psychology, and you’ll see what the sounds of silence look like in black and white.  Aside from a few obscure references, the whole thing is treated as if it never happened.  Be grateful, dear reader.  At last we can say the obvious without being shouted down by the “experts.”  There is such a thing as human nature.

Now look at the comments after the story in the Winnipeg Free Press I linked above.  Here are some of them.

“Our only chance of long-term survival is not to remain lurking on planet Earth, but to spread out into space.”  If that is the case, perhaps we don’t deserve to survive. If we bring destruction to our planet, would it not be in the greater interest to destroy the virus, or simply let it expire, instead of spreading its virulence throughout the galaxy?

And who would decide who gets to go? Also, “Our only chance of long-term survival is not to remain lurking on planet Earth, but to spread out into space.” What a stupid thing to say: if we can’t survive ‘lurking’ on planet Earth then who’s to say humans wouldn’t ruin things off of planet Earth?

I will not go through any of this as I will be dead by then and gone to a better place as all those who remain and go through whatever happenings in the Future,will also do!

I’ve written a lot about morality on this blog.  These comments speak to why it matters to get it right about morality – to understand its real nature, and why it exists.  All of them are morally loaded.  As is the case with virtually all morally loaded comments, their authors couldn’t give you a coherent explanation of why they have those opinions.  They just feel that way.  I don’t doubt that they’re entirely sincere about what they say.  The genetic programming that manifests itself as human moral behavior evolved many millennia ago in creatures who couldn’t conceive of themselves as members of a worldwide species, or imagine travel into space.  What these comments demonstrate is something that’s really been obvious for a long time.  In the environment that now exists, vastly different as it is from the one in which our moral predispositions evolved, they can manifest themselves in ways that are, by any reasonable definition of the word, pathological.  In other words, they can manifest themselves in ways that no longer promote our survival, but rather the opposite.

As can be seen from the first comment, for example, thanks to our expanded consciousness of the world we live in, we can conceive of such an entity as “all mankind.”  Our moral programming predisposes us to categorize our fellow creatures into ingroups and outgroups.  In this case, “all mankind” has become an outgroup or, as the commenter puts it, a “virus.”  The demise, not only of the individual commenter, but of all mankind, has become a positive Good.  More or less the same thing can be said about the second comment.  This commenter apparently believes that it would be better for humans to become extinct than to “mess things up.”  For whom?

As for the third commenter, survival in this world is unimportant to him because he believes in eternal survival in a future imaginary world under the proprietorship of an imaginary supernatural being.  It is unlikely that this attitude is more conducive to our real genetic survival than those of the first two commenters.  I submit that if these commenters had an accurate knowledge of the real nature of human morality in the first place, and were free of delusions about supernatural beings in the second, the tone of their comments would be rather different.

And what of my opinion on the matter?  In my opinion, morality is the manifestation of genetically programmed traits that evolved because they happened to promote our survival.  No doubt because I understand morality in this way, I have a subjective emotional tendency to perceive the Good as my own genetic survival, the survival of my species, and the survival of life as it has evolved on earth, not necessarily in that order.  Objectively, my version of the Good is no more legitimate or objectively valid than those of the three commenters.  In some sense, you might say it’s just a whim.  I do, however, think that my subjective feelings on the matter are reasonable.  I want to pursue as a “purpose” that which the evolution of morality happened to promote: survival.  It seems to me that an evolved, conscious biological entity that doesn’t want to survive is dysfunctional – it is sick.  I would find the realization that I am sick and dysfunctional distasteful.  Therefore, I choose to survive.  In fact, I am quite passionate about it.  I believe that, if others finally grasp the truth about what morality really is, they are likely to share my point of view.  If we agree, then we can help each other.  That is why I write about it.

By all means, then, let us colonize space, and not just our solar system, but the stars.  We can start now.  We lack sources of energy capable of carrying humans to even the nearest stars, but we can send life, even if only single-celled life.  Let us begin.

Belgium Joins the Nuclear de-Renaissance

The move away from nuclear power in Europe is becoming a stampede.  According to Reuters, the Belgians are now on the bandwagon, with plans for shutting down the country’s last reactors in 2025.  The news comes as no surprise, as the anti-nukers in Belgium have had the upper hand for some time.  However, the agreement reached by the country’s political parties has been made “conditional” on whether the energy deficit can be made up by renewable sources.  Since Belgium currently gets about 55 percent of its power from nuclear, the chances of that appear slim.  It’s more likely that baseload power deficits will be made up with coal and gas plants that emit tons of carbon and, in the case of coal, represent a greater radioactive hazard than nuclear because of the uranium and thorium they spew into the atmosphere.  No matter.  Since Fukushima, global warming hysteria is passé, and anti-nuclear hysteria is back in fashion again for the professional saviors of the world.

It will be interesting to see how all this turns out in the long run.  In the short term it will certainly be a boon to China and India.  They will continue to expand their nuclear capacity and their lead in advanced nuclear technology, with a windfall of cheaper fuel thanks to Western anti-nuclear activism.  By the time the Europeans come back to the real world and finally realize that renewables aren’t going to cover all their energy needs, they will likely be forced to fall back on increasingly expensive and heavily polluting fossil fuels.  Germany is already building significant new coal-fired capacity.

Of course, we may be dealt a wild card if one of the longshot schemes for taming fusion on the cheap actually works.  The odds look long at the moment, though.  We’re hearing nothing but a stony silence from the National Ignition Facility, which bodes ill for what seems to be the world’s last best hope to perfect inertial confinement fusion.  Things don’t look much better at ITER, the flagship facility for magnetic fusion, the other mainstream approach.  There are no plans to even fuel the facility before 2028.

Of Ingroups, Outgroups, and Global Climate Change

As I pointed out in my last post, “The outgroup have ye always with you.” Of all the very good reasons for mankind to give up the cobbling together of new moral systems once and for all, it’s probably the best. It’s more likely you’ll find a unicorn browsing in your back yard than one of the pathologically pious among us suffused with the milk of human kindness. One typically finds them in their “ground state,” frothing at the mouth with virtuous indignation over the latest sins of their preferred outgroup.

So it is with Eugene Robinson, one of their number who happens to pen an occasional column in the Washington Post. He recently delivered himself of some observations concerning the phenomenon of global warming. As anyone who hasn’t been asleep for the last decade will be aware, no branch of the sciences has been more afflicted of late by the attentions of the professionally righteous than climatology. Robinson gives us a good example of how the neat separation of climate scientists into good guys and bad guys works in practice.

The hero of his piece is one Richard Muller, a physicist at the University of California at Berkeley who, we learn, once dismissed “climate alarmism” as “shoddy science.” Not to worry. Though once lost, he is now found, and though once blind, he now sees. It turns out the scales fell from his eyes after he “launched his own comprehensive study (referred to as the Berkeley Earth Surface Temperature, or BEST, study, ed.) to set the record straight,” and discovered that, lo and behold, “Global warming is real.” Well, perhaps it is and perhaps it isn’t. I happen to believe that the arguments as to why it should be real are plausible enough, but that’s beside the point as far as this post is concerned.

What is to the point is Robinson’s reaction to all this. For him, Muller’s study isn’t just another batch of data points relating to a very complex scientific issue. For him, global warming is an absolute and incontrovertible certainty, because it represents the “good.” Muller’s study is, therefore, not just a scientific study, but a victory in the eternal battle of good versus evil. In Robinson’s own words,

For the clueless or cynical diehards who deny global warming, it’s getting awfully cold out there.

Rick Perry, Herman Cain, Michele Bachmann and the rest of the neo-Luddites who are turning the GOP into the anti-science party should pay attention.

But Muller’s plain-spoken admonition that “you should not be a skeptic, at least not any longer,” has reduced many deniers to incoherent grumbling or stunned silence.

and so on. As it happens, not all of the “skeptics” have been reduced to incoherent grumbling or stunned silence. Take, for example, Judith Curry, a distinguished climate researcher and Chair of the Department of Earth and Atmospheric Sciences at Georgia Tech. She was actually a member of Muller’s team, and so is presumably familiar with the copious data Robinson was so enthused about. However, in an interview for the Daily Mail, Curry accuses Muller of “trying to mislead the public by hiding the fact that BEST’s research shows global warming has stopped.” She also says that, “Prof. Muller’s claim that he has proven global warming sceptics wrong was also a ‘huge mistake’, with no scientific basis,” and goes so far as to compare the affair to “Climategate.” This is strong stuff, but Prof. Curry has the goods. She notes that, in carefully sifting through, as Robinson informs us, “1.6 billion records,” Muller somehow failed to mention that, according to BEST’s own data, “there has been no significant increase in world temperatures since the end of the 90’s.” The following two graphs from the website of the Global Warming Policy Foundation summarize that data:

Source: Global Warming Policy Foundation

It would seem that the good Prof. Muller, who had much to say about the first graph, complete with “hockey stick,” somehow forgot to mention the data in the second. In fact, as Prof. Curry put it, “…in the wake of the unexpected global warming standstill, many climate scientists who had previously rejected sceptics’ arguments were now taking them much more seriously.”

The Daily Mail article contains much else in the way of less than pleased reactions by a number of other climatologists at what was apparently a premature release of the BEST data before the peer review process was complete. Of course, all this fits very ill with the lurid picture of good triumphing over evil painted for us by Mr. Robinson. Predictably, while he was apparently observant enough to turn up any number of “grumbling and stunned” warming deniers, when it came to Prof. Curry and her equally chagrined colleagues, he didn’t notice a thing.

It should come as no surprise. Mr. Robinson is merely acting as one might expect of a member of a species endowed with certain innate behavioral characteristics. Some of those traits give rise to what is commonly referred to as moral behavior, and none of us are free of their emotional grip. That’s why Hollywood still makes movies about good guys and bad guys. It is our subjective nature to perceive sublime good, but the yin of sublime good cannot exist without the yang of despicable evil. Every ingroup implies an outgroup. There is little we can do to change our nature, and we would probably be unwise to try given our current intellectual endowments. We can, however, while accepting it for what it is, seek to find ways of channeling its expression in ways less destructive than we have experienced in the past. At the very least we need to understand it and develop an awareness of how it affects our behavior. The results of failing to do so in the past have been destructive enough, and have certainly made a hash of the science of climatology. The results of failing to do so in the future are unlikely to be any more encouraging.

The Great Heisenberg Uncertainty

Few of the great scientific principles have been more abused than that of the famous German physicist, Werner Heisenberg.  Known as the Heisenberg Uncertainty Principle, it states that certain pairs of physical quantities, such as position and momentum, cannot both be known at the same time with arbitrary precision: the product of their uncertainties can never be smaller than a certain very small number.  It was one of the great discoveries in quantum mechanics in the 1920’s, a decade studded with such discoveries, which resulted in the development of modern quantum theory.  Among other things, the modern theory states in mathematical terms the implications of the wave nature of both matter and energy.  That mathematics can be used to derive Heisenberg’s famous principle.
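In its standard textbook form, the principle for position and momentum reads $\Delta x \, \Delta p \geq \hbar/2$, where $\hbar$ is the reduced Planck constant.  A quick numerical sketch of my own shows why that “very small number” dominates the atomic world while being invisible in ours:

```python
# Minimum momentum and velocity uncertainty for an electron confined
# to atomic dimensions, from dx * dp >= hbar / 2.
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31  # electron mass, kg

delta_x = 1e-10                 # ~1 angstrom, roughly the size of an atom
delta_p = HBAR / (2 * delta_x)  # minimum momentum uncertainty
delta_v = delta_p / M_ELECTRON  # corresponding velocity uncertainty

print(f"delta p >= {delta_p:.2e} kg*m/s")
print(f"delta v >= {delta_v:.2e} m/s")  # ~5.8e5 m/s
```

A velocity uncertainty of more than half a million meters per second is comparable to the electron’s orbital speed itself, which is why the planetary picture of the atom fails.  Run the same arithmetic for a baseball and the uncertainties come out far too small to ever notice.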

Unfortunately, because many things in life are uncertain, the principle has been abundantly misapplied to a whole range of uncertainties to which it has no relevance whatsoever, just as the Theory of Relativity has been misapplied to a whole host of things that happen to be relative to each other in one way or another.  Some of the misapplications and misconceptions are more subtle than others.  I recently ran across an interesting one in a book entitled Swarm Intelligence, by James Kennedy and Russell Eberhart, the former a social psychologist and the latter an expert in evolutionary computation.  The book argues that “intelligent human cognition derives from the interactions of individuals in a social world and that the sociocognitive view can be effectively applied to computationally intelligent systems.”  I actually bought it to try programming a few of the computational examples contained therein, but found that it was as much a statement of ideology as of computational theory, larded with all the usual allusions to all the usual suspects among the philosophers who are currently fashionable in works of that genre.  Among other things there is a discussion on page 11 of whether such a thing as “true randomness” exists, or whether, on the contrary, in the words of the authors, “The basis of observed randomness is our incomplete knowledge of the world.  A seemingly random set of events may have a perfectly good explanation; that is, it may be perfectly compressible.”

As you may have gathered, all this eventually relates to the question of free will, and whether the universe is truly random or “stochastic” at some level, or, on the contrary, purely deterministic.  I will not presume to answer that fascinating question here.  However, the authors appear to be of the opinion that the latter is the case.  What caught my eye was one of the arguments they used to support that point of view.  Allow me to quote them at length:

For most of the 20th century it was thought that “true” randomness existed at the subatomic level. Results from double-slit experiments and numerous thought experiments had convinced quantum physicists that subatomic entities such as photons should be conceptualized both as particles and waves. In their wave form, such objects were thought to occupy a state that was truly stochastic, a probability distribution, and their position and momentum were not fixed until they were observed. In one of the classic scientific debates of 20th-century physics, Niels Bohr argued that a particle’s state was truly, unknowably random, while Einstein argued vigorously that this must be impossible: “God does not play dice.” Until very recently, Bohr was considered the winner of the dispute, and quantum events were considered to be perhaps the only example of true stochasticity in the universe. But in 1998, physicists Duerr, Nonn, and Rempe disproved Bohr’s theorizing, which had been based on Heisenberg’s uncertainty principle. The real source of quantum “randomness” is now believed to be the interactions or “entanglements” of particles, whose behavior is in fact deterministic.

In fact, the paper referred to, entitled “Origin of quantum-mechanical complementarity probed by a ‘which-way’ experiment in an atom interferometer,” is an elegant piece of work, but in no way, shape or form does it demonstrate that the “real source of quantum randomness is the interactions or ‘entanglements’ of particles, whose behavior is in fact deterministic.”  It can be found in its entirety here. The math and scientific notation are a little dense, but if you simply trust the authors of the paper on those matters, and just look at the discussion and conclusions at the end, you should be able to see without too much difficulty that it in no way has the significance that Kennedy and Eberhart assign to it.  They seem to think that it somehow represents a refutation of the Heisenberg principle.  In fact, the authors of the paper explicitly state the contrary.

Their paper is one of the many that have sought to shed light on the famous experiment in which interference patterns are formed by particles passing through a double slit, even when single particles are passed through one at a time, defeating any attempt to explain the phenomenon based on classical (non-quantum) physics.  The question they attempt to answer is not whether the Heisenberg principle is itself valid, but merely whether the principle must be invoked to explain the fact that “measuring” which one of the slits each particle passes through causes the loss of the interference pattern, or whether, on the contrary, some other mechanism can enforce the change.  It turns out that, in fact, the Heisenberg principle is not necessary.  Which of the “slits” (in this case the experiment is done with standing light waves rather than physical slits) each particle passes through can be measured by much more subtle means that have orders of magnitude less effect on particle momentum than would be necessary to justify invoking it.  In other words, what the authors are really saying is, not that the Heisenberg principle is wrong, or has been superseded by some new “deterministic” theory, but merely that it is not true that it must be invoked to explain “complementarity,” or the ability of quantum mechanical entities to behave as either particles or waves.

All this is very intriguing.  One wonders to what extent this meme that the experiment in question “proves” that we live in a deterministic universe is making the rounds among people who don’t actually understand its implications one way or the other.  It would hardly be the first time that authors have been cited as authorities for ideas that never appeared in their work.  To what extent do the authors of the paper realize they’ve become “famous” in this way?

And what of the great questions of free will, and whether we live in a deterministic or stochastic universe?  The world is full of people who are cocksure they know the answer.  They just don’t agree on what it is.  Alas, I fear we are not at the point at which we can really say one way or the other.  Before that can happen, it will be necessary for us to figure out the fundamental nature of all this stuff around us, and why it all exists to begin with.  We are yet far from having that knowledge, a fact that makes life that much more exciting.  There are still great new worlds for us to discover out there.

Werner Heisenberg

What is “Real Science?”

In our ideology-drenched times, it’s the same thing as “good science”: anything that happens to agree with your ideological narrative.

Powerline just served up a typical example relating to that über-politicized topic, global warming. According to the “good scientists” at Powerline, global warming theories are all wrong because they are currently experiencing snow and cold weather in Great Britain. Quoting from the article:

It’s fun to ridicule the warmists because they are so often wrong, but their errors are in fact significant: a scientific theory that implies predictions that turn out to be wrong, is false. A principal feature of climate hysteria is its proponents’ unwillingness to be judged by the standards that govern real science.

I know of not a single reputable climate scientist who would claim their theories “imply the prediction” that localized incidences of cold weather on the planet will no longer occur. In view of the solid evidence that, overall, the planet has, indeed, been warming over the past decade, I would like to know on what evidence Powerline is basing the claim that “warmists” are “so often wrong.” It’s rather cold in the DC area today, too. Does that also “disprove” global warming?

It’s not hard to find the same phenomenon on the other side of the ideological spectrum. There we often hear the claim that theories that significant global warming will occur over, say, the next century have been “proved.” This is “good science” in the same sense as Powerline’s claims about the cold weather in Britain. In the first place, the computational models on which such claims are based are just that: models. Even the best computational models are approximations.  Computational models of climate are far from the best.  Ideally, they would need to account for billions of degrees of freedom just to model the atmosphere alone, not to mention the coupling of the atmosphere with the oceans, etc.  No computer on earth, either now or in the foreseeable future, is capable of solving such a problem without severe simplifying assumptions.  The mathematical error bars on those assumptions have never been rigorously established.  Throw in the fact that the data is noisy and often corrupt or nonexistent, and that the best models are themselves probabilistic and not deterministic, and the claim that they “prove” anything is absurd.
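To put a rough number on that claim (my own back-of-envelope sketch, with deliberately round figures):

```python
# Degrees of freedom in a global atmospheric model at modest resolution.
EARTH_SURFACE_KM2 = 5.1e8  # ~510 million square kilometers
cell_km = 10.0             # 10 km x 10 km horizontal grid cells
vertical_levels = 100      # order-of-magnitude figure
variables_per_cell = 7     # e.g. three wind components, T, p, humidity, ...

columns = EARTH_SURFACE_KM2 / cell_km**2
dof = columns * vertical_levels * variables_per_cell
print(f"~{dof:.1e} degrees of freedom")  # ~3.6e9
```

Billions of unknowns already at a 10 kilometer grid, with clouds, turbulence, and everything else smaller than a grid cell still left to crude approximation.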

“Proved” is much too strong a term, but I would buy the claim that significant long term global warming is probable.  Given the fact that this is the only planet we have to live on at the moment, it doesn’t make a lot of sense to me that we should be rocking the boat.  I doubt that “science” will offer any solutions, though.  The hardening of ideological dogmas on both sides won’t allow it.  Whatever decisions are finally made, they are far more likely to be based on politics than science.

More Thorium Silliness

Thorium is a promising candidate as a future source of energy.  I just wonder what it is about the stuff that inspires so many people to write nonsense about it.  It doesn’t take a Ph.D. in physics to spot the mistakes.  Most of them should be obvious to anyone who’s taken the trouble to read a high school science book.  Another piece of misinformation has just turned up at the website of Popular Mechanics, in an article dubiously titled The Truth about Thorium and Nuclear Power.

The byline claims that, “Thorium has nearly 200 times the energy content of uranium,” a statement I will assume reflects the ignorance of the writer rather than any outright attempt to deceive. She cites physicist Carlo Rubbia as the source, but if he ever said anything of the sort, he was making some very “special” assumptions about the energy conversion process that she didn’t quite understand. I assume it must have had something to do with his insanely dangerous subcritical reactor scheme, in which case the assumptions necessary to get a factor of 200 would have been very “special” indeed. Thorium cannot sustain the nuclear chain reaction needed to produce energy on its own. It must first be transmuted to an isotope of uranium with an atomic weight of 233 (U233) by absorbing a neutron. Strictly speaking, then, the above statement is nonsense, because the “energy content” of thorium actually comes from a form of uranium, U233, which can sustain a chain reaction on its own. However, let’s be charitable and compare natural thorium and natural uranium as both come out of the ground when mined.

As I’ve already pointed out, thorium cannot be directly used in a nuclear reactor on its own.  Natural uranium actually can.  It consists mostly of an isotope of uranium with an atomic weight of 238 (U238), but also a bit over 0.7% of a lighter isotope with an atomic weight of 235 (U235).  U238, like thorium, is unable to support a nuclear chain reaction on its own, but U235, like U233, can.  Technically speaking, what that means is that, when the nucleus of an atom of U233 or U235 absorbs a neutron, enough energy is released to cause the nucleus to split, or fission.  When U238 or natural thorium (Th232) absorbs a neutron, energy is also released, but not enough to cause fission.  Instead, they become U239 and Th233, which eventually decay to produce plutonium 239 (Pu239) and U233, respectively.
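For reference, the two breeding chains run as follows; the intermediate isotopes and approximate half-lives are my addition, from standard references:

$$\mathrm{^{232}Th} + n \rightarrow \mathrm{^{233}Th} \xrightarrow{\beta^{-},\ \sim 22\ \mathrm{min}} \mathrm{^{233}Pa} \xrightarrow{\beta^{-},\ \sim 27\ \mathrm{d}} \mathrm{^{233}U}$$

$$\mathrm{^{238}U} + n \rightarrow \mathrm{^{239}U} \xrightarrow{\beta^{-},\ \sim 23\ \mathrm{min}} \mathrm{^{239}Np} \xrightarrow{\beta^{-},\ \sim 2.4\ \mathrm{d}} \mathrm{^{239}Pu}$$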

Let’s try to compare apples and apples, and assume that enough neutrons are around to convert all the Th232 to U233, and all the U238 to Pu239.  In that case we are left with a lump of pure U233 derived from the natural thorium and a mixture of about 99.3% Pu239 and 0.7% U235 from the natural uranium.  In the first case, the fission of each atom of U233 will release, on average, 200.1 million electron volts (MeV) of energy that can potentially be converted to heat in a nuclear reactor.  In the second, each atom of U235 will release, on average, 202.5 MeV, and each atom of Pu239 211.5 MeV of energy.  In other words, the potential energy release from natural thorium is actually about equal to that of natural uranium.
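That “about equal” is easy to verify with the per-atom figures just quoted (a quick check of my own):

```python
# Average energy release per fission, in MeV, as given above.
E_U233 = 200.1
E_U235 = 202.5
E_PU239 = 211.5

thorium_lump = E_U233                            # all Th232 bred to U233
uranium_lump = 0.993 * E_PU239 + 0.007 * E_U235  # natural isotopic split

print(f"thorium-derived: {thorium_lump:.1f} MeV/atom")
print(f"uranium-derived: {uranium_lump:.1f} MeV/atom")  # ~211.4 MeV
```

A difference of about five percent per atom – nowhere remotely near a factor of 200.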

Unfortunately, the “factor of 200” isn’t the only glaring mistake in the article.  The author repeats the familiar yarn about how uranium was chosen over thorium for power production because it produced plutonium needed for nuclear weapons as a byproduct.  In fact, uranium would have been the obvious choice even if weapons production had not been a factor.  As pointed out earlier, natural uranium can sustain a chain reaction in a reactor on its own, and thorium can’t.  Natural uranium can be enriched in U235 to make more efficient and smaller reactors.  Thorium can’t be “enriched” in that way at all.  Thorium breeders produce U232, a highly radioactive and dangerous isotope, which can’t be conveniently separated from U233, complicating the thorium fuel cycle.  Finally, the plutonium that comes out of nuclear reactors designed for power production, known as “reactor grade” plutonium, contains significant quantities of heavier isotopes of plutonium in addition to Pu239, making it unsuitable for weapons production.

Apparently the author gleaned some further disinformation from Seth Grae, CEO of Lightbridge, a Virginia-based company promoting thorium power.  He supposedly told her that U233 produced in thorium breeders “fissions almost instantaneously.”  In fact, the probability that it will fission is entirely comparable to that of U235 or Pu239, and it will not fission any more “instantaneously” than other isotopes.  Why Grae felt compelled to feed her this fable is beyond me, as “instantaneous” fission isn’t necessary to prevent diversion of U233 as a weapons material.  Unlike plutonium, it can be “denatured” by mixing it with U238, from which it cannot be chemically separated.

It’s a mystery to me why so much nonsense is persistently associated with discussions of thorium, a potential source of energy that has a lot going for it.  It has several very significant advantages over the alternative uranium/plutonium breeder technology, such as not producing significant quantities of plutonium and other heavy actinides, less danger that materials produced in the fuel cycle will be diverted for weapons purposes if the technology is done right, and the ability to operate in a more easily controlled “thermal” neutron environment.  I can only suggest that people who write popular science articles about nuclear energy take the time to educate themselves about the subject.  Tried and true old textbooks like Introduction to Nuclear Engineering and Introduction to Nuclear Reactor Theory by John Lamarsh have been around for years, don’t require an advanced math background, and should be readable by any intelligent person with a high school education.