The world as I see it
  • Fisking a Fusion Fata Morgana

    Posted on April 10th, 2018 by Helian

    Why is it that popular science articles about fusion energy are always so cringe-worthy? Is scientific illiteracy a prerequisite for writing them? Take the latest one to hit the streets, for example. Entitled Lockheed Martin Now Has a Patent For Its Potentially World Changing Fusion Reactor, it had all the familiar “unlimited energy is just around the corner” hubris we’ve come to expect in articles about fusion. When I finished reading it I wondered whether the author imagined all that nonsense on his own, or some devilish plasma physicist put him up to it as a practical joke. The fun starts in the first paragraph, where we are assured that,

    If this project has been progressing on schedule, the company could debut a prototype system the size of a shipping container, but capable of powering a Nimitz-class aircraft carrier or 80,000 homes, sometime in the next year or so.

    Trust me, dear reader, barring divine intervention no such prototype system, capable of both generating electric energy and fitting within a volume anywhere near that of a shipping container, will debut in the next year, or the next five years, or the next ten years.  Reading on, we learn that,

    Unlike in nuclear fission, where atoms hit each other release energy, a fusion reaction involves heating up a gaseous fuel to the point where its atomic structure gets disrupted from the pressure and some of the particles fuse into a heavier nucleus.

    Well, not really.  Fission is caused by free neutrons, not by “atoms hitting each other.”  It would actually be more accurate to say that fusion takes place when “atoms hit each other,” although it’s really the atomic nuclei that “hit” each other.  Fusion doesn’t involve “atomic structure getting disrupted from pressure.” Rather, it happens when two positively charged atomic nuclei acquire enough energy to overcome the Coulomb repulsion between them (remember, like charges repel) and approach closely enough for the much stronger attractive nuclear force to take over. According to the author,

    But to do this you need to be able to hold the gas, which is eventually in a highly energized plasma state, for a protracted period of time at a temperature of hundreds of millions of degrees Fahrenheit.

    This is like claiming that a solid can be in a liquid state. A plasma is not a gas. It is a fourth state of matter quite unlike the three (solid, liquid, gas) that most of us are familiar with. Shortly thereafter we are assured that,

    Running on approximately 25 pounds of fuel – a mixture of hydrogen isotopes deuterium and tritium – Lockheed Martin estimated the notional reactor would be able to run for an entire year without stopping. The device would be able to generate a constant 100 megawatts of power during that period.

    25 pounds of fuel would include about 15 pounds of tritium, a radioactive isotope of hydrogen with a half-life of just over 12 years. In other words, its atoms decay about 2000 times faster than those of the plutonium 239 found in nuclear weapons.  It’s true that the beta particle (electron) emitted in tritium decay is quite low energy by nuclear standards but, as noted in Wiki, “Tritium is an isotope of hydrogen, which allows it to readily bind to hydroxyl radicals, forming tritiated water (HTO), and to carbon atoms. Since tritium is a low energy beta emitter, it is not dangerous externally (its beta particles are unable to penetrate the skin), but it can be a radiation hazard when inhaled, ingested via food or water, or absorbed through the skin.”  Obviously, water and many carbon compounds can be easily inhaled or ingested. Tritium is anything but benign if released into the environment. Here we will charitably assume that the author didn’t mean to say that 25 pounds of fuel would be available all at once, but would be bred gradually and then consumed as fuel in the reactor during operation.  The amount present at any given time would more appropriately be measured in grams than in pounds.  The article continues with rosy scenarios that might have been lifted from a “Back to the Future” movie:
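    The decay-rate comparison and the fuel-burn arithmetic are easy to check.  Here is a back-of-the-envelope sketch in Python; the half-lives and the 17.6 MeV released per D-T reaction are standard values, and the 100 megawatt figure is the article's (taken here as pure fusion power, ignoring conversion losses):

```python
import math

# Half-lives in years, from standard isotope tables.
T_HALF_TRITIUM = 12.32
T_HALF_PU239 = 24_110.0

# Decay constant lambda = ln(2) / t_half; the ratio shows how much
# faster tritium atoms decay than those of plutonium-239.
ratio = (math.log(2) / T_HALF_TRITIUM) / (math.log(2) / T_HALF_PU239)
print(f"Tritium decays ~{ratio:,.0f}x faster than Pu-239")

# Rough tritium consumption for 100 MW of continuous fusion power,
# assuming every watt comes from D-T reactions at 17.6 MeV each.
MEV_TO_J = 1.602e-13
reactions_per_s = 100e6 / (17.6 * MEV_TO_J)
T_MASS_KG = 3.016 * 1.66054e-27          # mass of one tritium atom
kg_per_year = reactions_per_s * T_MASS_KG * 3.156e7
print(f"~{kg_per_year:.1f} kg of tritium burned per year")
```

    The ~5.6 kg/year figure is the right order of magnitude for the article's claim, but it is the amount consumed over a year, not the in-reactor inventory at any moment.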

    Those same benefits could apply to vehicles on land, ships at sea, or craft in space, providing nearly unlimited power in compact form allowing for operations across large areas, effectively eliminating the tyranny of distance in many cases. Again, for military applications, unmanned ground vehicles or ships could patrol indefinitely far removed from traditional logistics chains and satellites could conduct long-term, resource intensive activities without the need for large and potentially dangerous fission reactors.

    Great shades of “Dr. Fusion!” Let’s just say that “vehicles on land” is a bit of a stretch. I can only hope that no Lockheed engineer was mean-spirited enough to feed the author such nonsense. Moving right along, we read,

    Therein lies perhaps the biggest potential benefits of nuclear fusion over fission. It’s produces no emissions dangerous to the ozone layer and if the system fails it doesn’t pose nearly the same threat of a large scale radiological incident. Both deuterium and tritium are commonly found in a number of regular commercial applications and are relatively harmless in low doses.

    I have no idea what “emission” of the fission process the author thinks is “dangerous to the ozone layer.” Again, as noted above, tritium is anything but “relatively harmless” if ingested. Next we find perhaps the worst piece of disinformation of all:

    And since a fusion reactor doesn’t need refined fissile material, its much harder for it to serve as a starting place for a nuclear weapons program.

    Good grief, the highly energetic neutrons produced in a fusion reactor are not only capable of breeding tritium, but plutonium 239 and uranium 233 from naturally occurring uranium and thorium as well.  Both are superb explosive fuels for nuclear weapons.  And tritium?  It is used in a process known as “boosting” to improve the performance of nuclear weapons.  Finally, we run into what might be called the Achilles heel of all tritium-based fusion reactor designs:

    Fuel would also be abundant and relatively easy to source, since sea water provides a nearly unlimited source of deuterium, while there are ready sources of lithium to provide the starting place for scientists to “breed” tritium.

    I think not. Breeding tritium will be anything but a piece of cake.  The process will involve capturing the neutrons produced by the fusion reactions in a lithium blanket surrounding the reactor, doing so efficiently enough to generate more tritium from the resulting reactions than the reactor consumes as fuel, and then extracting the tritium and recycling it into the reactor without releasing any of the slippery stuff into the environment.  Do you think the same caliber of engineers who brought us Chernobyl, Fukushima, and Three Mile Island will be able to pull that rabbit out of their hats without a hitch?  If so, you’re more optimistic than I am.
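    To see why the breeding margin matters, consider a toy inventory balance: the surplus tritium bred each year must at least offset the radioactive decay of whatever reserve accumulates.  A sketch under assumed numbers (the ~5.6 kg/year burn rate corresponds to roughly 100 MW of continuous D-T fusion; the breeding ratios are hypothetical):

```python
import math

SECONDS_PER_YEAR = 3.156e7
# Tritium decay constant in 1/s, from its 12.32-year half-life.
LAMBDA = math.log(2) / (12.32 * SECONDS_PER_YEAR)

def steady_state_inventory(burn_rate_kg_per_yr: float, tbr: float) -> float:
    """Tritium reserve (kg) at which breeding surplus balances decay.

    dI/dt = (TBR - 1) * burn_rate - lambda * I  =>  I = (TBR - 1) * R / lambda
    """
    surplus_kg_per_s = (tbr - 1.0) * burn_rate_kg_per_yr / SECONDS_PER_YEAR
    return surplus_kg_per_s / LAMBDA

# Hypothetical tritium breeding ratios (TBR) a few percent above unity.
for tbr in (1.01, 1.05, 1.10):
    inv = steady_state_inventory(5.6, tbr)
    print(f"TBR={tbr:.2f}: reserve tops out at ~{inv:.1f} kg")
```

    The point of the sketch is that a breeding ratio of exactly 1.0 is not good enough: any stored reserve loses about 5.5 percent of its tritium per year to decay, so the blanket must breed a real surplus just to stand still.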

    Hey, I like to be as optimistic about fusion as it’s reasonable to be. I think it’s certainly possible that some startup company with a bright idea will find the magic bullet that makes fusion reactors feasible, preferably involving fusion reactions that don’t involve tritium. It’s also quite possible that the guys at Lockheed will achieve breakeven, although getting a high enough gain of energy out versus energy in to enable efficient generation of electric power is another matter.  There’s a difference between optimism and scientifically illiterate hubris, though.  Is it too much to ask that people who write articles about fusion at least run them by somebody who actually knows something about the subject to see if they pass the “ho, ho” test before publishing?  What’s that you say?  What about me?  Please read the story about the Little Red Hen.

  • The Bomb and the Nuclear Posture Review

    Posted on April 22nd, 2017 by Helian

    A Nuclear Posture Review (NPR) is a legislatively mandated review, typically conducted every five to ten years.  It assesses such things as the role, safety and reliability of the weapons in the U.S. nuclear stockpile, the status of facilities in the nuclear weapons complex, and nuclear weapons policy in areas such as nonproliferation and arms control.  The last one was conducted in 2010.  The Trump Administration directed that another one be conducted this year, and the review is already in its initial stages.  It should be finished by the end of the year.  There is reason for concern about what the final product might look like.

    Trump has made statements to the effect that the U.S. should “expand its nuclear capability,” and that, “We have nuclear arsenals that are in very terrible shape.  They don’t even know if they work.”  Such statements have typically been qualified by his aides.  It’s hard to tell whether they reflect serious policy commitments, or just vague impressions based on a few minutes of conversation with some Pentagon wonk.  In fact, there are deep differences of opinion about these matters within the nuclear establishment.  That’s why the eventual content of the NPR might be problematic.  There have always been people within the nuclear establishment, whether at the National Nuclear Security Administration (NNSA), the agency within the Department of Energy responsible for maintaining the stockpile, or in the military, who are champing at the bit to resume nuclear testing.  Occasionally they will bluntly question the reliability of the weapons in our stockpile, even though by that very act they diminish the credibility of our nuclear deterrent.  If Trump’s comments are to be taken seriously, the next NPR may reflect the fact that they have gained the upper hand.  That would be unfortunate.

    Is it really true that the weapons in our arsenal are “in very terrible shape,” and we “don’t even know if they work?”  I doubt it.  In the first place, the law requires that both the Department of Energy and the Department of Defense sign off on an annual assessment that certifies the safety and reliability of the stockpile.  They have never failed to submit that certification.  Beyond that, the weapons in our stockpile are the final product of more than 1000 nuclear tests.  They are both safe and robust.  Any credible challenge to their safety and reliability must cite some plausible reason why they might fail.  I know of no such reason.

    For the sake of argument, let’s consider what might go wrong.  Modern weapons typically consist of a primary and a secondary.  The primary consists of a hollow “pit” of highly enriched uranium or plutonium surrounded by high explosive.  Often it is filled with a “boost” gas consisting of a mixture of deuterium and tritium, two heavy isotopes of hydrogen.  When the weapon is used, the high explosive implodes the pit, causing it to form a dense mass that is highly supercritical.  At the same time, nuclear fusion takes place in the boost gas, producing highly energetic neutrons that enhance the yield of the primary.  At the right moment an “initiator” sends a burst of neutrons into the imploded pit, setting off a chain reaction that results in a nuclear explosion.  Some of the tremendous energy released in this explosion in the form of x-rays then implodes the secondary, causing it, too, to explode, adding to the yield of the weapon.

    What could go wrong?  Of course, explosives are volatile.  Those used to implode the primary might deteriorate over time.  However, these explosives are carefully monitored to detect any such deterioration.  Other than that, the tritium in the boost gas is radioactive, and has a half-life of only a little over 12 years.  It will gradually decay into helium, reducing the effectiveness of boosting.  This, too, however, is a well understood process, one that is carefully monitored and compensated for by timely replacement of the tritium.  Corrosion of key parts might occur, but this, too, is carefully checked, and the potential sources are well understood.  All these potential sources of uncertainty affect the primary.  However, much of the uncertainty about their effects can be eliminated experimentally.  Of course, the experiments can’t include actual nuclear explosions, but surrogate materials with similar properties can be substituted for the uranium and plutonium in the pit.  The implosion process can then be observed using powerful x-ray or proton beams.  Unfortunately, our experimental capabilities in this area are limited.  We cannot observe the implosion process all the way from the initial explosion to the point at which maximum density is achieved in three dimensions, taking “snapshots” at optimally short intervals.  To do that, we would need what has been referred to as an Advanced Hydrodynamic Facility, or AHF.
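    The decay arithmetic behind boost-gas replacement is simple exponential decay; a quick sketch:

```python
import math

HALF_LIFE_YEARS = 12.32  # tritium

def tritium_fraction_remaining(years: float) -> float:
    """Fraction of an initial tritium fill still present after `years`.

    The balance has decayed to helium-3, which degrades boost performance.
    """
    return math.exp(-math.log(2) * years / HALF_LIFE_YEARS)

for t in (1, 5, 12.32):
    print(f"after {t:>5} y: {tritium_fraction_remaining(t):.1%} tritium remains")
```

    Roughly 5.5 percent of the fill is gone after a single year, which is why the reservoirs are exchanged on a regular schedule rather than left in place indefinitely.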

    We currently have an unmatched suite of above ground experimental facilities for studying the effects of aging on the weapons in our stockpile, including the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, the Z Machine at Sandia National Laboratories, and the Dual-Axis Radiographic Hydrodynamic Test facility (DARHT) at Los Alamos.  These give us a very significant leg up on the international competition when it comes to maintaining our stockpile.  That is a major reason why it would be foolish for us to resume nuclear testing.  We would be throwing away this advantage.  Unfortunately, while we once seriously considered building an AHF, basically an extremely powerful accelerator, we never got around to doing so.  It was a serious mistake.  If we had such a facility, it would effectively pull the rug out from under the feet of those who want to resume testing.  It would render all arguments to the effect that “we don’t even know if they work” moot.  We could demonstrate with a very high level of confidence that they will indeed work.

    But that’s water under the bridge.  We must hope that cooler heads prevail, and the NPR doesn’t turn out to be a polemic challenging the credibility of the stockpile and advising a resumption of testing.  We’re likely to find out one way or the other before the end of the year.  Keep your fingers crossed.

  • No Ignition at the National Ignition Facility: A Post Mortem

    Posted on March 21st, 2015 by Helian

    The National Ignition Facility, or NIF, at Lawrence Livermore National Laboratory (LLNL) in California was designed and built, as its name implies, to achieve fusion ignition.  The first experimental campaign intended to achieve that goal, the National Ignition Campaign, or NIC, ended in failure.  Scientists at LLNL recently published a paper in the journal Physics of Plasmas outlining, to the best of their knowledge to date, why the experiments failed.  Entitled “Radiation hydrodynamics modeling of the highest compression inertial confinement fusion ignition experiment from the National Ignition Campaign,” the paper concedes that,

    The recently completed National Ignition Campaign (NIC) on the National Ignition Facility (NIF) showed significant discrepancies between post-shot simulations of implosion performance and experimentally measured performance, particularly in thermonuclear yield.

    To understand what went wrong, it’s necessary to know some facts about the fusion process and the nature of scientific attempts to achieve fusion in the laboratory.  Here’s the short version:  The neutrons and protons in an atomic nucleus are held together by the strong force, which is about 100 times stronger than the electromagnetic force, and operates only over tiny distances measured in femtometers.  The average binding energy per nucleon (proton or neutron) due to the strong force is greatest for the elements in the middle of the periodic table, and gradually decreases in the directions of both the lighter and heavier elements.  That’s why energy is released by fissioning heavy atoms like uranium into lighter atoms, or fusing light atoms like hydrogen into heavier atoms.  Fusion of light elements isn’t easy.  Before the strong force that holds atomic nuclei together can take effect, two light nuclei must be brought very close to each other.  However, atomic nuclei are all positively charged, and like charges repel.  The closer they get, the stronger the repulsion becomes.  The sun solves the problem with its crushing gravitational force.  On earth, the energy of fission can also provide the necessary force in nuclear weapons.  However, concentrating enough energy to accomplish the same thing in the laboratory has proved a great deal more difficult.
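    The energy bookkeeping for the D-T reaction follows directly from the mass defect; a quick check using standard atomic masses:

```python
# Atomic masses in unified atomic mass units (standard table values).
M_D = 2.014102    # deuterium
M_T = 3.016049    # tritium
M_HE4 = 4.002602  # helium-4
M_N = 1.008665    # neutron

U_TO_MEV = 931.494  # energy equivalent of 1 u

# Mass lost in D + T -> He-4 + n, converted to energy.
q_mev = (M_D + M_T - M_HE4 - M_N) * U_TO_MEV
print(f"D-T reaction releases ~{q_mev:.2f} MeV")  # ~17.59 MeV
```

    The products weigh slightly less than the reactants; that missing mass, via E = mc², is the 17.6 MeV carried off by the alpha particle and the neutron.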

    The problem is to confine incredibly hot material at sufficiently high densities for a long enough time for significant fusion to take place.  At the moment there are two mainstream approaches to solving it:  magnetic fusion and inertial confinement fusion, or ICF.  In the former, confinement is achieved with powerful magnetic lines of force.  That’s the approach at the international ITER fusion reactor project currently under construction in France.  In ICF, the idea is to first implode a small target of fuel material to extremely high density, and then heat it to the necessary high temperature so quickly that its own inertia holds it in place long enough for fusion to happen.  That’s the approach being pursued at the NIF.

    The NIF consists of 192 powerful laser beams, which can concentrate about 1.8 megajoules of light on a tiny spot, delivering all that energy in a time of only a few nanoseconds.  It is much larger than the next biggest similar facility, the OMEGA laser system at the Laboratory for Laser Energetics in Rochester, NY, which maxes out at about 40 kilojoules.  The NIC experiments were indirect drive experiments, meaning that the lasers weren’t aimed directly at the BB-sized, spherical target, or “capsule,” containing the fuel material (a mixture of deuterium and tritium, two heavy isotopes of hydrogen).  Instead, the target was mounted inside of a tiny, cylindrical enclosure known as a hohlraum with the aid of a thin, plastic “tent.”  The lasers were fired through holes on each end of the hohlraum, striking the walls of the cylinder, generating a pulse of x-rays.  These x-rays then struck the target, ablating material from its surface at high speed.  In a manner similar to a rocket exhaust, this drove the remaining target material inward, causing it to implode to extremely high densities, about 40 times that of the heaviest naturally occurring elements.  As it implodes, the material must be kept as “cold” as possible, because it’s easier to squeeze and compress things that are cold than those that are hot.  However, when it reaches maximum density, a way must be found to heat a small fraction of this “cold” material to the very high temperatures needed for significant fusion to occur.  This is accomplished by setting off a series of shocks during the implosion process that converge at the center of the target at just the right time, generating the necessary “hot spot.”  The resulting fusion reactions release highly energetic alpha particles, which spread out into the surrounding “cold” material, heating it and causing it to fuse as well, in a “burn wave” that propagates outward.  “Ignition” occurs when the amount of fusion energy released in this way is equal to the energy in the laser beams that drove the target.
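    That ignition threshold can be put in concrete numbers.  A quick sketch using the NIF's 1.8 MJ and the 17.6 MeV released per D-T reaction (standard values, not figures from the paper under discussion):

```python
LASER_ENERGY_MJ = 1.8
MEV_PER_DT = 17.6
MEV_TO_J = 1.602e-13

# Number of D-T reactions needed for the fusion yield to equal the
# laser energy -- the "gain = 1" ignition threshold described above.
reactions = LASER_ENERGY_MJ * 1e6 / (MEV_PER_DT * MEV_TO_J)
print(f"~{reactions:.2e} reactions for gain = 1")

# Of each 17.6 MeV, the neutron carries ~14.1 MeV and escapes; only
# the ~3.5 MeV alpha particle stays behind to drive the burn wave.
neutron_energy_fraction = 14.1 / 17.6
print(f"~{neutron_energy_fraction:.0%} of the yield leaves as neutrons")
```

    This is why neutron yield is the standard scorecard for these experiments: counting the 14.1 MeV neutrons is a direct measure of how many fusion reactions actually occurred.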

    As noted above, things didn’t go as planned.  The actual fusion yield achieved in the best experiment was lower than the predictions of the best radiation hydrodynamics computer codes available at the time by a factor of about 50, give or take.  The LLNL paper in Physics of Plasmas discusses some of the reasons for this, and describes subsequent improvements to the codes that account for some, but not all, of the experimental discrepancies.  According to the paper,

    Since these simulation studies were completed, experiments have continued on NIF and have identified several important effects – absent in the previous simulations – that have the potential to resolve at least some of the large discrepancies between simulated and experimental yields.  Briefly, these effects include larger than anticipated low-mode distortions of the imploded core – due primarily to asymmetries in the x-ray flux incident on the capsule, – a larger than anticipated perturbation to the implosion caused by the thin plastic membrane or “tent” used to support the capsule in the hohlraum prior to the shot, and the presence, in some cases, of larger than expected amounts of ablator material mixed into the hot spot.

    In a later section, the LLNL scientists also note,

    Since this study was undertaken, some evidence has also arisen suggesting an additional perturbation source other than the three specifically considered here.  That is, larger than anticipated fuel pre-heat due to energetic electrons produced from laser-plasma interactions in the hohlraum.

    In simple terms, the first of these passages means that the implosions weren’t symmetric enough, and the second means that the fuel may not have been “cold” enough during the implosion process.  Any variation from perfectly spherical symmetry during the implosion can rob energy from the central hot spot, allow material to escape before fusion can occur, mix cold fuel material into the hot spot, quenching it, etc., potentially causing the experiment to fail.  The asymmetries in the x-ray flux mentioned in the paper mean that the target surface would have been pushed harder in some places than in others, resulting in asymmetries to the implosion itself.  A larger than anticipated perturbation due to the “tent” would have seeded instabilities, such as the Rayleigh-Taylor instability.  Imagine holding a straw filled with water upside down.  Atmospheric pressure will prevent the water from running out.  Now imagine filling a perfectly cylindrical bucket with water to the same depth.  If you hold it upside down, the atmospheric pressure over the surface of the water is the same.  Based on the straw experiment, the water should stay in the bucket, just as it did in the straw.  Nevertheless, the water comes pouring out.  As they say in the physics business, the straw experiment doesn’t “scale.”  The reason for this anomaly is the Rayleigh-Taylor instability.  Over such a large surface, small variations from perfect smoothness are gradually amplified, growing to the point that the surface becomes “unstable,” and the water comes splashing out.  Another, related instability, the Richtmyer-Meshkov instability, leads to similar results in material where shocks are present, as in the NIF experiments.
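    The classical linear growth rate of the Rayleigh-Taylor instability makes the "small imperfections get amplified" point concrete.  A sketch with purely illustrative numbers; the Atwood number, ripple wavelength, and acceleration below are not NIF-specific values:

```python
import math

def atwood(rho_heavy: float, rho_light: float) -> float:
    """Atwood number A = (rho2 - rho1) / (rho2 + rho1), between 0 and 1."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rt_growth_rate(atwood_num: float, wavelength_m: float, accel: float) -> float:
    """Classical linear RT growth rate gamma = sqrt(A * k * g)."""
    k = 2 * math.pi / wavelength_m  # perturbation wavenumber
    return math.sqrt(atwood_num * k * accel)

# Illustrative case: a 50-micron ripple, Atwood number 0.5, and an
# acceleration of 1e14 m/s^2 (ICF-scale, but an assumed round number).
gamma = rt_growth_rate(0.5, 50e-6, 1e14)
print(f"growth rate ~{gamma:.2e} 1/s; one e-fold in {1e9 / gamma:.2f} ns")
# While the perturbation stays small, its amplitude grows as
# a(t) ~ a0 * exp(gamma * t) -- a few e-folds can ruin the implosion.
```

    With an e-folding time well under a nanosecond, even a sub-micron seed from the "tent" can grow large during a multi-nanosecond implosion, which is exactly the mechanism the LLNL paper identifies.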

    Now, with the benefit of hindsight, it’s interesting to look back at some of the events leading up to the decision to build the NIF.  At the time, the government used a “key decision” process to approve major proposed projects.  The first key decision, known as Key Decision 0, or KD0, was approval to go forward with conceptual design.  The second was KD1, approval of engineering design and acquisition.  There were more “key decisions” in the process, but after passing KD1, it could safely be assumed that most projects were “in the bag.”  In the early 90’s, a federal advisory committee, known as the Inertial Confinement Fusion Advisory Committee, or ICFAC, had been formed to advise the responsible agency, the Department of Energy (DOE), on matters relating to the national ICF program.  Among other things, its mandate included advising the government on whether it should proceed with key decisions on the NIF project.  The Committee’s advice was normally followed by DOE.

    At the time, there were six major “program elements” in the national ICF program.  These included the three weapons laboratories, LLNL, Los Alamos National Laboratory (LANL), and Sandia National Laboratories (SNL).  The remaining three included the Laboratory for Laser Energetics at the University of Rochester (UR/LLE), the Naval Research Laboratory (NRL), and General Atomics (GA).  Spokespersons from all these “program elements” appeared before the ICFAC at a series of meetings in the early 90’s.  The critical meeting as far as approval of the decision to pass through KD1 is concerned took place in May 1994.  Prior to that time, extensive experimental programs at LLNL’s Nova laser, UR/LLE’s OMEGA, and a host of other facilities had been conducted to address potential uncertainties concerning whether the NIF could achieve ignition.  The best computer codes available at the time had modeled proposed ignition targets, and predicted that several different designs would ignite, typically producing “gains,” the ratio of the fusion energy out to the laser energy in, of from 1 to 10.  There was just one major fly in the ointment – a brilliant physicist named Steve Bodner, who directed the ICF program at NRL at the time.

    Bodner told the ICFAC that the chances of achieving ignition on the NIF were minimal, providing his reasons in the form of a detailed physics analysis.  Among other things, he noted that there was no way of controlling the symmetry because of blow-off of material from the hohlraum wall, which could absorb both laser light and x-rays.  Ablated material from the capsule itself could also absorb laser and x-ray radiation, again destroying symmetry.  He pointed out that codes had raised the possibility of pressure perturbations on the capsule surface due to stagnation of the blow-off material on the hohlraum axis.  LLNL’s response was that these problems could be successfully addressed by filling the hohlraum with a gas such as helium, which would hold back the blow-off from the walls and target.  Bodner replied that such “solutions” had never really been tested because of the inability to do experiments on Nova with sufficient pulse length.  In other words, it was impossible to conduct experiments that would “scale” to the NIF on existing facilities.  In building the NIF, we might be passing from the “straw” to the “bucket.”  He noted several other areas of major uncertainty with NIF-scale targets, such as the possibility of unaccounted for reflection of the laser light, and the possibility of major perturbations due to so-called laser-plasma instabilities.

    In light of these uncertainties, Bodner suggested delaying approval of KD1 for a year or two until these issues could be more carefully studied.  By then, he suggested, we might have gained the technological confidence to proceed.  However, I suspect he knew that two years would never be enough to resolve the issues he had raised.  What Bodner really wanted to do was build a much larger facility, known as the Laboratory Microfusion Facility, or LMF.  The LMF would have a driver energy of from 5 to 10 megajoules compared to the NIF’s 1.8.  It had been seriously discussed in the late 80’s and early 90’s.  Potentially, such a facility could be built with Bodner’s favored KrF laser drivers, the kind used on the Nike laser system at NRL, instead of the glass lasers that had been chosen for NIF.  It would be powerful enough to erase the physics uncertainties he had raised by “brute force.”  Bodner’s proposed approach was plausible and reasonable.  It was also a forlorn hope.

    Funding for the ICF program had been cut in the early 90’s.  Chances of gaining approval for a beast as expensive as LMF were minimal.  As a result, it was now officially considered a “follow-on” facility to the NIF.  No one took this seriously at the time.  Everyone knew that, if NIF failed, there would be no “follow-on.”  Bodner knew this, the scientists at the other program elements knew it, and so did the members of the ICFAC.  The ICFAC was composed of brilliant scientists.  However, none of them had any real insight into the guts of the computer codes that were predicting ignition on the NIF.  Still, they had to choose between the results of the big codes, and Bodner’s physical insight bolstered by what were, in comparison, “back of the envelope” calculations.  They chose the big codes.  With the exception of Tim Coffey, then Director of NRL, they voted to approve passing through KD1 at the May meeting.

    In retrospect, Bodner’s objections seem prophetic.  The NIC has failed, and he was not far off the mark concerning the reasons for the failure.  It’s easy to construe the whole affair as a morality tale, with Bodner playing the role of neglected Cassandra, and the LLNL scientists as villains whose overweening technological hubris finally collided with the grim realities of physics.  Things aren’t that simple.  The LLNL people, not to mention the supporters of NIF from the other program elements, included many responsible and brilliant scientists.  They were not as pessimistic as Bodner, but none of them was 100% positive that the NIF would succeed.  They decided the risk was warranted, and they may well yet prove to be right.

    In the first place, as noted above, chances that an LMF might be substituted for the NIF after another year or two of study were very slim.  The funding just wasn’t there.  Indeed, the number of laser beams on the NIF itself had been reduced from the originally proposed 240 to 192, at least in part, for that very reason.  It was basically a question of the NIF or nothing.  Studying the problem to death, now such a typical feature of the culture at our national research laboratories, would have led nowhere.  The NIF was never conceived as an energy project, although many scientists preferred to see it in that light.  Rather, it was built to serve the national nuclear weapons program.  Its supporters were aware that it would be of great value to that program even if it didn’t achieve ignition.  In fact, it is, and is now providing us with a technological advantage that rival nuclear powers can’t match in this post-testing era.  Furthermore, LLNL and the other weapons laboratories were up against another problem – what you might call a demographic cliff.  The old, testing-era weapons designers were getting decidedly long in the tooth, and it was necessary to find some way to attract new talent.  A facility like the NIF, capable of exploring issues in inertial fusion energy, astrophysics, and other non-weapons-related areas of high energy density physics, would certainly help address that problem as well.

    Finally, the results of the NIC in no way “proved” that ignition on the NIF is impossible.  There are alternatives to the current indirect drive approach with frequency-tripled “blue” laser beams.  Much more energy, up to around 4 megajoules, might be available if the known problems of using longer wavelength “green” light can be solved.  Thanks to theoretical and experimental work done by the ICF team at UR/LLE under the leadership of Dr. Robert McCrory, the possibility of direct drive experiments on the NIF, hitting the target directly instead of shooting the laser beams into a “hohlraum” can, was also left open, using a so-called “polar” illumination approach.  Another possibility is the “fast ignitor” approach to ICF, which would dispense with the need for complicated converging shocks to produce a central “hot spot.”  Instead, once the target had achieved maximum density, the hot spot would be created on the outer surface using a separate driver beam.

    In other words, while the results of the NIC are disappointing, stay tuned.  Pace Dr. Bodner, the scientists at LLNL may yet pull a rabbit out of their hats.


  • Latest from the National Ignition Facility: More Neutrons, Less Hope

    Posted on February 8th, 2014 by Helian

    A paper with some recent ignition experiment results from the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) in California just turned up at Physical Review Letters.  The good news is that they’re seeing more of the neutrons that are released in fusion reactions than ever before, and the yield is in good agreement with computer code predictions.  The bad news is that they improved things by doing something that’s supposed to make things worse.  Specifically, they increased the energy in the laser “foot” pulse that’s supposed to get the target implosion started.

    The NIF was designed to achieve ignition via inertial confinement fusion (ICF), a process in which the fuel material is compressed and heated to fusion conditions so quickly that its own inertia holds it in place long enough for significant fusion to occur.  Scientists at LLNL are currently using “indirect drive” in their experiments.  In other words, instead of hitting the BB-sized target directly, they mount it inside of a tiny, cylindrical can, or “hohlraum,” with holes at each end for the laser beams to pass through.  When the beams hit the inside of the hohlraum, they produce a powerful pulse of x-rays, which then hit the target, imploding it to extremely high density.  It’s harder to squeeze and implode hot objects than cold ones, so the laser beams are tailored to keep the target as “cold” as possible during the implosion process.  However, the fuel material must be very hot for fusion to occur.  According to theory, this can be achieved by launching a series of shocks into the imploding target, which must converge in the center at the moment of greatest compression, creating a central “hot spot.”  When fusion reactions start in the hot spot, they produce highly energetic helium nuclei (alpha particles), which then slam into the surrounding, still cold, fuel material, heating it to fusion conditions, producing more alpha particles, resulting in an alpha-driven “burn wave,” which moves out through the target, consuming the fuel.
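    The phrase “its own inertia holds it in place long enough” can be made quantitative with a back-of-envelope estimate: the compressed fuel stays assembled for roughly the time it takes a sound wave to cross the hot spot.  Here is a minimal sketch of that estimate; the hot spot radius and temperature are illustrative round numbers of my own choosing, not NIF specifications:

```python
import math

# Rough inertial confinement time: tau ~ R / c_s, where c_s is the
# ion sound speed of the hot, compressed DT fuel.
keV = 1.602e-16          # one keV expressed in joules
m_DT = 2.5 * 1.67e-27    # average DT ion mass in kg (~2.5 proton masses)

R = 30e-6                # illustrative hot spot radius: 30 micrometers
T_keV = 5.0              # illustrative ion temperature: 5 keV

c_s = math.sqrt(T_keV * keV / m_DT)   # ion sound speed in m/s
tau = R / c_s                         # confinement time in seconds

print(f"sound speed ~ {c_s:.2e} m/s, confinement time ~ {tau:.1e} s")
```

    With these numbers the answer comes out to a few tens of picoseconds, which is why the converging shocks mentioned above have to be timed with such extraordinary precision.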

    So far, it hasn’t worked.  Apparently, hydrodynamic instabilities, such as the Rayleigh-Taylor and Richtmyer-Meshkov instabilities, are a big part of the problem.  They amplify tiny target surface imperfections during the implosion process, destroying the symmetry of the implosion, and quenching the fusion process.  There are some interesting simulations of the Rayleigh-Taylor instability on Youtube.  In the latest experiments, the LLNL team managed to control the growth of instabilities by using a smaller target “aspect ratio,” that is, by increasing the thickness of the outer shell relative to the target radius, and driving it by dumping more energy into the “foot” pulse.  As a result, they drove the implosion process along a higher “adiabat,” which basically means that the fuel was hotter during the implosion.  Of course, absent instabilities, making the fuel hotter during the implosion is exactly what you don’t want to do.  In spite of that, LLNL is seeing more neutrons.

    What this all boils down to is that LLNL has confirmed that the NIF has a big, potentially fatal problem with hydrodynamic instabilities using the current indirect drive approach to fusion ignition.  That doesn’t mean the situation is hopeless.  There are other approaches.  Examples include direct drive, in which the laser beams are aimed directly at the target, and fast ignitor, in which the cold, compressed fuel material is ignited on the outside, by another laser beam designed specifically for that purpose, rather than in a central “hot spot.”  In fact, the biggest potential problem here is probably more political than scientific.  You certainly have to get ignition if you plan to use inertial fusion as a source of energy, but, in spite of occasional hype to the contrary, the NIF was never intended as an energy project.  It was funded to support the weapons program in general, and to ensure the continuing safety and reliability of the weapons in our arsenal in the absence of nuclear testing in particular.  It can do that extremely well, whether we get ignition or not.  The politicians whose support is needed to fund continued operation of the project need to realize that.

    Regardless of whether it achieves ignition or not, the NIF is performing as well as or better than its design specs called for.  The symmetry and synchronization of its 192 laser beams are outstanding, and it has a remarkable and highly capable suite of diagnostics for recording what happens during the experiments.  The NIF can dump so much energy into a small space in so short a time that it can generate physical conditions that can be reproduced nowhere else on earth.  Those conditions approach those that occur inside of an exploding nuclear device.  As a result, such experimental facilities give us a major leg up on the competition as long as there is no resumption of nuclear testing.  In other words, with the NIF and facilities like it we have a strong, positive incentive not to resume testing, since a resumption could cost us that advantage.  Without such facilities, the pressure to resume testing may become irresistible.  It’s really an easy choice.

    The simulation of the Rayleigh-Taylor instability below was done by Frederik Brasz.

  • Another Fusion Tease?

    Posted on October 10th, 2013 Helian No comments

    It has always seemed plausible to me that some clever scientist(s) might find a shortcut to fusion that would finally usher in the age of fusion energy, rendering the two “mainstream” approaches, inertial confinement fusion (ICF) and magnetic fusion, obsolete in the process.  It would be nice if it happened sooner rather than later, if only to put a stop to the ITER madness.  For those unfamiliar with the field, the International Thermonuclear Experimental Reactor, or ITER, is a gigantic, hopeless, and incredibly expensive white elephant and welfare project for fusion scientists currently being built in France.  In terms of pure, unabashed wastefulness, think of it as a clone of the International Space Station.  It has always been peddled as a future source of inexhaustible energy.  Trust me, nothing like ITER will ever be economically competitive with alternative energy sources.  Forget all your platitudes about naysayers and “they said it couldn’t be done.”  If you don’t believe me, leave a note to your descendants to fact check me 200 years from now.  They can write a gloating refutation to my blog if I’m wrong, but I doubt that it will be necessary.

    In any case, candidates for the hoped-for end run around magnetic and ICF keep turning up, all decked out in the appropriate hype.  So far, at least, none of them has ever panned out.  Enter two-stage laser fusion, the latest pretender, introduced over at NextBigFuture with the assurance that it can achieve “10x higher fusion output than using the laser directly and thousands of times better output than hitting a solid target with a laser.”  Not only that, but it actually achieved the fusion of boron and normal hydrogen nuclei, which produces only stable helium nuclei.  That’s much harder to achieve than the usual deuterium-tritium fusion between two heavy isotopes of hydrogen, one of which, tritium, is radioactive and found only in tiny traces in nature.  That means it wouldn’t be necessary to breed tritium from the fusion reactions just to keep them going, one of the reasons that ITER will never be practical.

    Well, I’d love to believe this is finally the ONE, but I’m not so sure.  The paper describing the results NBF refers to was published by the journal Nature Communications.  Even if you don’t subscribe, you can click on the figures in the abstract and get the gist of what’s going on.  In the first place, one of the lasers has to accelerate protons to high enough energies to overcome the Coulomb repulsion of the stripped (of electrons) boron nuclei produced by the other laser.  Such laser particle accelerators are certainly practical, but they only work at extremely high power levels.  In other words, they require what’s known in the business as petawatt lasers, capable of achieving powers in excess of a quadrillion (10 to the 15th power) watts.  Power comes in units of energy per unit time, and such lasers generally reach the petawatt threshold by producing a lot of energy in a very, very short time.  Often, we’re talking picoseconds (trillionths of a second).
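    The petawatt arithmetic is easy to check for yourself: power is just energy divided by time, so even a modest pulse energy crosses the petawatt threshold if it is delivered in a picosecond.  A quick sanity check, with illustrative numbers:

```python
# Power = energy / time.  A petawatt is 1e15 watts.
pulse_energy = 1000.0    # joules: a kilojoule-class pulse (illustrative)
pulse_length = 1e-12     # seconds: one picosecond

power = pulse_energy / pulse_length
print(f"peak power = {power:.2e} W = {power / 1e15:.2f} PW")
```

    A single kilojoule, compressed into a picosecond, is already a full petawatt.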

    Now, you can do really, really cool things with petawatt lasers, such as pulling electron-positron pairs right out of the vacuum.  However, their practicality as drivers for fusion power plants, at least in their current incarnation, is virtually nil.  The few currently available, for example, at the University of Rochester’s Laboratory for Laser Energetics, the University of Texas at Austin, the University of Nevada at Reno, etc., are glass lasers.  There’s no way they could achieve the “rep rates” (shot frequency) necessary for useful energy generation.  Achieving lots of fusions, but only for a few picoseconds, isn’t going to solve the world’s energy problems.
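    The rep rate objection is also easy to quantify: average fusion power is just the yield per shot times the shot frequency, before any conversion losses.  The numbers below are deliberately optimistic and purely illustrative:

```python
# Average power = yield per shot * shots per second, ignoring all
# conversion losses (which only make matters worse).
yield_per_shot = 10e6        # joules: an optimistic 10 MJ of fusion yield
shots_per_second = 1 / 3600  # one shot per hour, generous for a glass laser

average_power = yield_per_shot * shots_per_second
print(f"average fusion power ~ {average_power:.0f} W")
```

    Even granting a 10 megajoule yield on every shot, one shot per hour averages out to less than 3 kilowatts, about what a couple of electric kettles draw.  A power plant would need several shots per second.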

    As it happens, conventional accelerators can also be used for fusion.  As a matter of fact, it’s a common way of generating neutrons for such purposes as neutron radiography.  Unfortunately, none of the many fancy accelerator-driven schemes for producing energy that people have come up with over the years has ever worked.  There’s a good physical reason for that.  Instead of using their energy to overcome the Coulomb repulsion of other nuclei (like charges repel, and atomic nuclei are all positively charged), and fuse with them, the accelerated particles prefer to uselessly dump that energy into the electrons surrounding those nuclei.  As a result, it has always taken more energy to drive the accelerators than could be generated in the fusion reactions.  That’s where the “clever” part of this scheme comes in.  In theory, at least, all those pesky electrons are gone, swept away by the second laser.  However, that, too, is an energy drain.  So the question becomes, can both lasers be run efficiently enough, at high enough rep rates, and with enough energy output to strip enough boron atoms that more energy comes out of the fusion reactions than is needed to drive the lasers?  I don’t think so.  Still, it was a very cool experiment.

  • Phys Rev Letters meets Homo ideologicus

    Posted on September 17th, 2012 Helian No comments

    According to Wikipedia, Physical Review Letters’ “focus is rapid dissemination of significant, or notable, results of fundamental research on all topics related to all fields of physics. This is accomplished by rapid publication of short reports, called ‘Letters'”. That’s what I always thought, so I was somewhat taken aback to find an article in last week’s issue entitled, “Encouraging Moderation: Clues from a Simple Model of Ideological Conflict.” Unfortunately, you can’t see the whole thing without a subscription, but here’s the abstract:

    Some of the most pivotal moments in intellectual history occur when a new ideology sweeps through a society, supplanting an established system of beliefs in a rapid revolution of thought. Yet in many cases the new ideology is as extreme as the old. Why is it then that moderate positions so rarely prevail? Here, in the context of a simple model of opinion spreading, we test seven plausible strategies for deradicalizing a society and find that only one of them significantly expands the moderate subpopulation without risking its extinction in the process.

    That’s physics?! Not according to any of the definitions in my ancient copy of Webster’s Dictionary.  Evidently some new ones have cropped up since it was published, and nobody bothered to inform me.  In any case, tossing in this kind of stuff doesn’t exactly enhance the integrity of the field.  If you don’t have access to the paper, I would not encourage you to visit your local university campus to have a look.  I doubt the effort would be worth it.

    Where should I start?  In the first place, the authors simply assume that “moderate” is to be conflated with “good”, without bothering to offer a coherent definition of “moderate.”  In the context of U.S. politics, for example, the term is practically useless.  People with an ideological ax to grind tend to consider themselves “moderate,” and their opponents “extreme.”  Conservatives refer to the mildest of their opponents as “extreme left wing,” and liberals refer to the most milquetoast of their opponents as “ultra right wing.”  Consider, for example, a post about the Muhammad film flap that just appeared on a website with the moniker, “The Moderate Voice.”  I don’t doubt that it might be termed “moderate” in the academic milieu from which papers such as the one we are discussing usually emanate, but it wouldn’t pass the smell test as such among mainstream conservatives, and has already been dismissed in those quarters as the fumings of the raving extremist hacks of the left.  Back in the 30’s, the commonplace and decidedly “moderate” opinion among the authors who contributed articles to The New Republic, the American Mercury, the Atlantic, and the other prestigious intellectual journals of the day was that capitalism was breathing its last, and should be replaced with a socialist system of one stripe or another as soon as possible.  Obviously, what passes as “moderate” isn’t constant, even over relatively short times.  Is the Tea Party Movement moderate?  Certainly not as far as most university professors are concerned, but decidedly so among mainstream conservatives.

    According to the authors, the types of ideological swings they refer to occur in science as well as politics.  One wonders what “moderation” would look like in such cases.  Perhaps the textbooks would inform us that only half the species on earth evolved, and God created the rest, or that, while oxygen is necessary to keep a fire burning, phlogiston is necessary to start one, or that only the most visible stars are embedded in a crystal sphere surrounding the earth known as the “firmament,” while the rest are actually many light years away.

    Undeterred by such considerations, the authors created a simple mathematical model that is supposed to reflect the dynamics of ideological change.  Just as economic models are infallible at predicting the behavior of Homo economicus, this one is similarly effective at predicting the behavior of what one might call Homo ideologicus.  As for Homo sapiens, not so much.  There is no attempt whatsoever to incorporate even the most elementary aspects of human nature in the model.  It is inhabited by “speakers” and “listeners,” who are identified as either AB, the inhabitants of the moderate middle ground, or A and B, the extremists on either side of it.  For good measure, there is also a group Ac, inhabited by “committed” and intransigent followers of A.  The subpopulations in these groups are, in turn, labeled nA, nB, nAB, and p.  Only moderate listeners can be converted to one of the extremes, and vice versa, although we are reliably informed that, for example, the Nazis found some of their most fertile recruiting grounds among the Communists at the opposite extreme, and certainly not just among German moderates.  With the assumptions noted above, and setting aside trivialities such as units of measure, the authors come up with “dynamic equations” such as,

    dnA/dt = (p + nA)nAB – nAnB
    dnB/dt = nBnAB – (p + nA)nB

    There are variations, complete with parameters to account for “stubbornness” and “evangelism.”  There are any number of counterintuitive assumptions implicit in the models: that all speakers are equally effective at convincing others to change sides; that opinions about given issues are held independently of opinions about other issues, although this is almost never the case among people who care about the issues one way or the other; that a metric for deciding what is the moderate “good” and what the extreme “evil” will always be available to the philosopher kings who apply the models; and so on.  The models were tested on “real social networks,” and (surprise, surprise) the curves derived from a judicious choice of nA, nB, etc., were in nice agreement with predictions.
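    For the curious, equations of this type are simple enough to integrate numerically.  The sketch below uses plain Euler stepping with nAB = 1 – p – nA – nB; the committed fraction and initial conditions are illustrative choices of my own, and the second rate equation is completed by symmetry with the first:

```python
# Euler integration of the speaker-listener "dynamic equations":
#   dnA/dt = (p + nA)*nAB - nA*nB
#   dnB/dt = nB*nAB - (p + nA)*nB
# with nAB = 1 - p - nA - nB.  All numbers are illustrative.
p = 0.2                  # committed fraction of A partisans
nA, nB = 0.3, 0.5        # initial uncommitted A and B fractions
dt, steps = 0.01, 50_000

for _ in range(steps):
    nAB = 1.0 - p - nA - nB
    dnA = (p + nA) * nAB - nA * nB
    dnB = nB * nAB - (p + nA) * nB
    nA += dt * dnA
    nB += dt * dnB

print(f"nA ~ {nA:.3f}, nB ~ {nB:.3f}")
```

    With a committed fraction this large, the B camp is driven to extinction and nA settles at 1 – p, exactly the sort of ideological sweep the authors set out to model.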

    According to the authors,

    Since we present no formal evidence that the dynamics of (the equations noted above) do actually occur in practice, our work could alternatively be viewed as posing this model and its subsequent generalizations as interesting in their own right.

    While I heartily concur with the first part of the sentence, I suggest that the model and its subsequent generalizations might be of more enduring interest to sociologists than physicists.  Perhaps the editors of Phys Rev Letters and their reviewers will consider that possibility the next time a similar paper is submitted, and kindly direct the authors to a more appropriate journal.

  • Higgs Boson? What’s a Boson?

    Posted on July 7th, 2012 Helian No comments

    It’s been over a century since Max Planck came up with the idea that electromagnetic energy could only be emitted in fixed units called quanta as a means of explaining the observed spectrum of light from incandescent light bulbs. Starting from this point, great physicists such as Bohr, de Broglie, Schrödinger, and Dirac developed the field of quantum mechanics, revolutionizing our understanding of the physical universe. By the 1930’s it was known that matter, as well as electromagnetic energy, could be described by wave equations. In other words, at the level of the atom, particles do not behave at all as if they were billiard balls on a table, or, in general, in the way that our senses portray physical objects to us at a much larger scale. For example, electrons don’t act like hard little balls flying around outside the nuclei of atoms.  Rather, it is necessary to describe where they are in terms of probability distributions, and how they act in terms of wave functions. It is impossible to tell at any moment exactly where they are, a fact formalized mathematically in Heisenberg’s famous Uncertainty Principle. All this has profound implications for the very nature of reality, most of which, even after the passage of many decades, are still unknown to the average lay person. Among other things, it follows from all this that there are two basic types of elementary particles: fermions and bosons. It turns out that they behave in profoundly different ways, and that the idiosyncrasies of neither of them can be understood in terms of classical physics.

    Sometimes the correspondence between mathematics and physical reality seems almost magical.  So it is with the math that predicts the existence of fermions and bosons.  When it was discovered that particles at the atomic level actually behave as waves, a brilliant Austrian scientist named Erwin Schrödinger came up with a now-famous wave equation to describe the phenomenon.  Derived from a few elementary assumptions based on postulates of Einstein and others relating the wavelength and frequency of matter waves to physical quantities such as momentum and energy, along with the behavior of waves in general, the Schrödinger equation could be solved to find wave functions.  It was found that these wave functions were complex numbers, that is, they had a real component, and an “imaginary” component that was a multiple of i, the square root of minus one.  For example, such a number might be written down mathematically as x + iy.  Each such number has a complex conjugate, found by changing the sign of the imaginary term.  The complex conjugate of the above number is, therefore, x – iy.  Max Born found that the probability of finding a physical particle at any given point in space and time could be derived from the product of a solution to Schrödinger’s equation and its complex conjugate.
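    Born’s insight is easy to illustrate with Python’s built-in complex numbers: the product of an amplitude with its complex conjugate is always real and non-negative, which is exactly what a probability density requires.  The amplitude below is an arbitrary made-up value, not a solution of any particular Schrödinger equation:

```python
# A complex "wave function" value x + iy and its conjugate x - iy.
psi = complex(3.0, -4.0)            # arbitrary illustrative amplitude
prob = (psi * psi.conjugate()).real

# (x + iy)(x - iy) = x**2 + y**2: here 9 + 16 = 25, real and non-negative.
print(psi.conjugate(), prob)        # (3+4j) 25.0
```

    The imaginary parts always cancel in the product, which is what lets it serve as a probability.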

    So far, so good, but eventually it was realized that there was a problem with describing particles in this way that didn’t arise in classical physics; you couldn’t tell them apart!  Elementary particles are, after all, indistinguishable.  One electron, for example, resembles every other electron like so many peas in a pod.  Suppose you could put two electrons in a glass box, and set them in motion bouncing off the walls.  Assuming you had very good eyes, you wouldn’t have any trouble telling the two of them apart if they behaved like classical billiard balls.  You would simply have to watch their trajectories as they bounced around in the box.  However, they don’t behave like billiard balls.  Their motion must be described by wave functions, and wave functions can overlap, making it impossible to tell which wave function belongs to which electron!  Trying to measure where they are won’t help, because the wave functions are changed by the very act of measurement.

    All this was problematic, because if elementary particles really were indistinguishable in that way, they also had to be indistinguishable in the mathematical equations that described their behavior.  As noted above, it had been discovered that the physical attributes of a particle could be determined in terms of the product of a solution to Schrödinger’s equation and its complex conjugate.  Assuming for the moment that the two electrons in the box didn’t collide or otherwise interact with each other, that implies that the solution for the two-particle system would depend on the product of the solutions for both particles and their complex conjugates.  Unfortunately, the simple product didn’t work.  If the particles were labeled and the labels switched around in the solution, the answer came out different.  The particles were distinguishable!  What to do?

    Well, Schrödinger’s equation has a very useful mathematical property.  It is linear.  What that means in practical terms is that if the product of the wave functions for the two-particle system is a solution, then any linear combination of such products will also be a solution.  It was found that if the overall solution was expressed as the product of the two wave functions plus their product with the labels of the two particles interchanged, or as the product of the two wave functions minus their product with the labels interchanged, the resulting probability density function was not changed by switching the labels around.  The particles remained indistinguishable!
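    The label-switching trick can be checked numerically.  The two single-particle functions below are toy examples I made up for illustration, not solutions of any particular Schrödinger equation; the point is only that the plain product changes when the particle labels are swapped, while the symmetric and antisymmetric combinations yield probability densities that do not:

```python
import math

# Two made-up single-particle wave functions (illustrative only).
def phi_a(x):
    return math.exp(-x * x)

def phi_b(x):
    return x * math.exp(-x * x)

x1, x2 = 0.3, 1.1   # positions of the two indistinguishable particles

plain = phi_a(x1) * phi_b(x2)                         # labels matter here
sym   = phi_a(x1) * phi_b(x2) + phi_a(x2) * phi_b(x1)
anti  = phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

# Swap the particle labels and compare the probability densities.
plain_sw = phi_a(x2) * phi_b(x1)
sym_sw   = phi_a(x2) * phi_b(x1) + phi_a(x1) * phi_b(x2)
anti_sw  = phi_a(x2) * phi_b(x1) - phi_a(x1) * phi_b(x2)

print(plain**2 == plain_sw**2)   # False: the particles are distinguishable
print(sym**2 == sym_sw**2)       # True: density unchanged
print(anti**2 == anti_sw**2)     # True: only the sign of anti flips
```

    The antisymmetric combination changes sign under the swap, but its square, the measurable probability density, does not.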

    The solution to the Schrödinger equation, referred to mathematically as an eigenfunction, is called symmetric in the plus case, and antisymmetric in the minus case.  It turns out, however, that if you do the math, particles act in very different ways depending on whether the plus sign or the minus sign is used.  And here’s where the magic comes in.  So far we’ve just been doing math, right?  We’ve just been manipulating symbols to get the math to come out right.  Well, as the great physicist Richard Feynman once put it, “To those who do not know mathematics it is difficult to get across a real feeling as to the beauty, the deepest beauty, of nature.”  So it is in this case.  The real particles act just as the math predicts, and in ways that are completely unexplainable in terms of classical physics!  Particles that can be described by an antisymmetric eigenfunction are called fermions, and particles that can be described by a symmetric eigenfunction are called bosons.

    How do they actually differ?  Well, for reasons I won’t go into here, the so-called exclusion principle applies to fermions.  There can never be more than one of them in exactly the same quantum state.  Electrons are fermions, and that’s why they are arranged in different levels as they orbit the nucleus of an atom.  Bosons behave differently, and in ways that can be quite spectacular.  If a collection of bosons can be cooled to a low enough temperature, they will all tend to condense into the same low energy quantum state.  As it happens, the helium atom is a boson.  When it is cooled below a temperature of 2.18 degrees above absolute zero, it shows some very remarkable large scale quantum effects.  Perhaps the weirdest of these is superfluidity.  In this state, it behaves as if it had no viscosity at all, and can climb up the sides of a container and siphon itself out over the top!

    No one really knows what matter is at a fundamental level, or why it exists at all.  However, we do know enough about it to realize that our senses only tell us how it acts at the large scales that matter to most living creatures.  They don’t tell us anything about its essence.  It’s unfortunate that now, nearly a century after some of these wonderful discoveries about the quantum world were made, so few people know anything about them.  It seems to me that knowing about them and the great scientists who made them adds a certain interest and richness to life.  If nothing else, when physicists talk about the Higgs boson, it’s nice to have some clue what they’re talking about.

    Superfluid liquid helium creeping over the edge of a beaker