Posted on July 24th, 2012
According to a recent press release from Lawrence Livermore National Laboratory (LLNL) in California, the 192-beam National Ignition Facility (NIF) fired a 500 terawatt shot on July 5. The world-record power followed a world-record energy shot of 1.89 megajoules on July 3. As news, this doesn’t rise above the “meh” category. A shot at the NIF’s design energy of 1.8 megajoules was already recorded back in March. It’s quite true that, as NIF Director Ed Moses puts it, “NIF is becoming everything scientists planned when it was conceived over two decades ago.” The NIF is a remarkable achievement in its own right, capable of achieving energies 50 times greater than any other laboratory laser facility, with pulses shaped and timed to pinpoint precision. The NIF team in general and Ed Moses in particular deserve great credit, and the nation’s gratitude, for that achievement after turning things around following a very shaky start.
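As a quick sanity check (the arithmetic here is mine, not the press release’s, and it treats the two records as if they came from a single shot), the record energy and record power taken together imply an effective pulse duration of a few billionths of a second, which is indeed the regime the NIF operates in:

```python
# Rough consistency check of the two record figures.
energy_j = 1.89e6       # record shot energy: 1.89 megajoules (July 3)
peak_power_w = 500e12   # record power: 500 terawatts (July 5)

# If the full energy were delivered at peak power, the pulse would last:
effective_duration_s = energy_j / peak_power_w
print(f"effective duration: {effective_duration_s * 1e9:.2f} ns")  # ~3.78 ns
```

Real NIF pulses are carefully shaped rather than flat, so this is only an order-of-magnitude figure.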
The problem is that, while the facility works as well as, and even better than, planned, the goal it was built to achieve continues to elude us. As its name implies, the news everyone is actually waiting for is the announcement that ignition (defined as fusion energy out greater than laser energy in) has been achieved. As noted in the article, Moses said back in March that “We have all the capability to make it happen in fiscal year 2012.” At this point, he probably wishes his tone had been a mite less optimistic. To reach their goal in the two months remaining, the NIF team will need to pull a rabbit out of their collective hat. A slim chance remains. Apparently the NIF’s 192 laser beams were aimed at a real ignition target with a depleted uranium capsule and deuterium-tritium fuel on July 5, and not a surrogate. The data from that shot may prove to be a great deal more interesting than the 500 terawatt power announcement.
Meanwhile, the Russians are apparently forging ahead with plans for their own superlaser, to be capable of a whopping 2.8 Megajoules, and the Chinese are planning another about half that size, to be operational at about the same time (around 2020). That, in itself, speaks volumes about the real significance of ignition. It may be huge for the fusion energy community, but not that great as far as the weaponeers who actually fund these projects are concerned. Many weapons designers at LLNL and Los Alamos were notably unenthusiastic about ignition when NIF was still in the planning stages. What attracted them more was the extreme conditions, approaching those in an exploding nuke, that could be achieved by the lasers without ignition. They thought, not without reason, that it would be much easier to collect useful information from such experiments than from chaotic ignition plasmas. Apparently the Russian bomb designers agree. They announced their laser project back in February even though LLNL’s difficulties in achieving ignition were well known at the time.
The same can be said of some of the academic types in the NIF “user community.” It’s noteworthy that two of them, Rick Petrasso of MIT and Ray Jeanloz of UC Berkeley, whose enthusiastic comments about the 500 terawatt shot were quoted in the latest press release, are both key players in the field of high energy density physics. Ignition isn’t a sine qua non for them either. They will be able to harvest scores of papers from the NIF whether it achieves ignition or not.
The greatest liability of not achieving early ignition may be the evaporation of political support for the NIF. The natives are already becoming restless. As noted in the Livermore Independent,
In early May, sounding as if it were discussing an engineering project rather than advanced research, the House Appropriations Committee worried that NIF’s “considerable costs will not have been warranted” if it does not achieve ignition by September 30, the end of the federal fiscal year.
Later that month, in a tone that seemed to demand that research breakthroughs take place according to schedule, the House Armed Services Committee recommended that NIF’s ignition research budget for next year be cut by $30 million from the requested $84 million budget unless NIF achieves ignition by September 30.
Funding cuts at this point, after we have come so far, and are so close to the goal, would be short-sighted indeed. One must hope that a Congress capable of squandering billions on white elephants like the International Space Station will not become penny-wise and pound-foolish about funding a project that really matters.
Posted on July 23rd, 2012
We tend to be strongly influenced by the recent past in our predictions about the future. After World War I, any number of pundits, statesmen, and military officers thought the next war would be a carbon copy of the one they had just lived through, albeit perhaps on a larger scale. The German government’s disastrous decision to declare war in 1914 was likely influenced by the quick and decisive German victories in 1864, 1866, and 1870. The Japanese were similarly mesmerized by their brilliant success against the Russians in 1904-05 after an opening surprise attack against the Russian fleet lying at anchor at Port Arthur, and assumed history would repeat itself if they launched a similar attack against Pearl Harbor.
Sometimes startling events force the reevaluation of old ideas and paradigms, such as the German armored Blitzkrieg or the destruction of powerful battleships from the air in World War II, or, more recently, the sudden collapse of Communism and the Soviet Union from 1989-91. We are always fascinated by such events, yet few of us grasp their significance as they are happening. Our tendency is always to look backwards, to fit the revolutionary and the unprecedented into the old world that we understand rather than the new one that we can’t yet imagine. So it was after the dropping of the first atomic bombs. It certainly focused the attention of public intellectuals, unleashing a torrent of essays full of dire predictions. For many, the future they imagined was simply a continuation of the immediate past, albeit with new and incredibly destructive weapons. It was to include the continued inexorable push for world dominion by totalitarian Communism, centered in the Soviet Union, and world wars following each other in quick succession every 15 to 20 years, about the same as the interval between the first two world wars.
Such a vision of the future was described by James Burnham in “The Struggle for the World,” published in 1947. Burnham was a former Marxist and Trotskyite who eventually abandoned Marxism, and became one of the leading conservative intellectuals of his day. His thought made a deep impression on, among others, George Orwell. For example, he had suggested the possibility of a world dominated by three massive totalitarian states, constantly at war with each other, in an earlier book, “The Managerial Revolution,” published in 1941. These became Oceania, Eastasia, and Eurasia in Orwell’s “1984.” The notions of “doublethink”, the totalitarian use of terms such as “justice” and “peace” in a sense opposite to their traditional meanings, and the rewriting of history every few years “so that history itself will always be a confirmation of the immediate line of the party,” familiar to readers of “1984,” were also recurrent themes in “The Struggle for the World.”
Burnham, born in 1905, had come of age during the stunning period of wars, revolutions, and the birth of the first totalitarian states that began and ended with the world wars of the 20th century. He assumed that events of such global impact would continue at the same pace, only this time in a world with nuclear weapons. As a former Marxist, he knew that the Communists, at least, were deliberately engaged in a “struggle for the world,” and was dismayed that U.S. politicians at the time were so slow to realize the nature of the struggle. He also correctly predicted that, unless they were stopped, the Communists would develop nuclear weapons in their Soviet base “in a few years.” This, he warned, could not be allowed to happen because it would inevitably and quickly lead to a full scale nuclear exchange. His reasoning was as follows:
Let us assume that more than one (two is enough for the assumption) power possesses, and is producing, atomic weapons. Each will be improving the efficiency and destructive potential of the weapons as it goes along. Now let us try to reason as the leaders of these powers would be compelled to reason.
Each leader of Power A could not but think as follows: Power B has at its disposal instruments which could, in the shortest time, destroy us. He has possibly made, or is about to make, new discoveries which will threaten even more complete and rapid destruction. At the moment, perhaps, he shows no open disposition to use these instruments. Nevertheless, I cannot possibly rely on his continued political benevolence – above all since he knows that I also have at my disposal instruments that can destroy him. Some hothead – or some wise statesman – of his may even now be giving the order to push the necessary buttons.
Even if there were no atomic weapons, many of the leaders would undoubtedly be reasoning today along these lines. Atomic weapons are, after all, not responsible for warfare, not even for the Third World War, which has begun. The fact that the political and social causes of a war are abundantly present stares at us from every edition of every newspaper. The existence of atomic weapons merely raises the stakes immeasurably higher, and demands a quicker decision.
But to assume, as do some foolish commentators, that fear of retaliation will be the best deterrent to an atomic war is to deny the lessons of the entire history of war and of society. Fear, as Ferrero so eloquently shows, is what provokes the exercise of force. Most modern wars have been, in the minds of every belligerent, preventive: an effort to stamp out the fear of what the other side might be about to do.
The existence of two or more centers of control of atomic weapons would be equal to a grenade with the pin already pulled.
According to Burnham, the resulting nuclear war or wars would lead to the collapse of Western Civilization. In his words,
If, however, we are not yet ready to accept passively the final collapse of Western Civilization, we may state the following as a necessary first condition of any workable solution of the problem of atomic weapons: there must be an absolute monopoly of the production, possession and use of all atomic weapons.
One wonders what direction world history might have taken had someone like Burnham been President in 1950 instead of Truman. He would have almost certainly adopted MacArthur’s plan to drop numerous atomic bombs on China and North Korea. We were lucky. In the end, Truman’s homespun common sense prevailed over Burnham’s flamboyant intellect, and the nuclear genie remained in the bottle.
However, in 1947 the U.S. still had a monopoly of nuclear weapons, and, for the reasons cited above, Burnham insisted we must keep it. He suggested that this might best be done by establishing an effectual world government, but dismissed the possibility as impractical. The only workable alternative to a Communist conquest of the world or full scale nuclear war and the end of Western Civilization was U.S. hegemony. In Burnham’s words,
It is not our individual minds or desires, but the condition of world society, that today poses for the Soviet Union, as representative of communism, and for the United States, as representative of Western Civilization, the issue of world leadership. No wish or thought of ours can charm this issue away.
This issue will be decided, and in our day. In the course of the decision, both of the present antagonists may, it is true, be destroyed. But one of them must be.
Whatever the words, it is well also to know the reality. The reality is that the only alternative to the communist World Empire is an American Empire which will be, if not literally worldwide in formal boundaries, capable of exercising decisive world control. Nothing less than this can be the positive, or offensive, phase of a rational United States policy.
As a first step to empire, Burnham proposed the union of Great Britain and the United States, to be followed, not by outright conquest, but by firm assertion of U.S. predominance and leadership in the non-Communist world. Beyond that, the Communist threat must finally be recognized for what it was, and a firm, anti-Communist policy substituted for what was seen as a lack of any coherent policy at all. Vacillation must end.
Fortunately, when it came to the nuclear standoff, Burnham was wrong, and the “foolish commentators” who invoked the fear of retaliation were right. Perhaps, having only seen the effects of dropping two low yield bombs, he could not yet imagine the effect of thousands of bombs orders of magnitude more powerful, or conceive of such a thing as mutually assured destruction. Perhaps it was only dumb luck, but the world did not stumble into a nuclear World War III as it had into the conventional world wars of the 20th century, and the decisive events in the struggle did not follow each other nearly as quickly as Burnham imagined they would.
Burnham also failed to foresee the implications of the gradual alteration in the nature of the Communist threat. At the time he wrote, it was everything he claimed it to be, a messianic secular religion at the height of its power and appeal. He assumed that it would retain that power and appeal until the battle was decided, one way or the other. Even though he was aware that the masses living under Communism, other than a dwindling number of incorrigible idealists, were already disillusioned by “the God that failed,” he didn’t foresee what a decisive weakness that would eventually become. In the end, time was on our side. The Communists, and not we, as Lenin had predicted, finally dropped onto the garbage heap of history “like a ripe plum.”
However, Burnham wasn’t wrong about everything. To win the struggle, it was necessary for us to finally recognize the threat. Whatever doubt remained on that score, at least as far as most of our political leaders were concerned, was dissipated by the North Korean invasion of the south. Our policy of vacillation didn’t exactly end, but was occasionally relieved by periods of firmness. In the end, in spite of a media dominated through most of the struggle by Lenin’s “useful idiots” and the resultant cluelessness of most Americans about what we were even trying to do on the front lines of the “clash between the cultures” in places like Vietnam, we prevailed.
It was a near thing. Burnham feared that, even after losing the opening battles of the next war to a United States with a monopoly of nuclear weapons, the Communists might regroup, abandon their vulnerable cities, and transform the struggle into a “people’s war.” His description of what would follow was eerily similar to what actually did happen, but in a much smaller arena than the whole world:
They would transform the struggle into a political war, a “people’s war,” fought in every district of the world by irregulars, partisans, guerillas, Fifth Columns, spies, stool pigeons, assassins, fought by sabotage and strikes and lies and terror and diversion and panic and revolt. They would play on every fear and prejudice of the United States population, every feeling of guilt or nobility; they would exploit every racial and social division; they would widen every antagonism between tentative allies; and they would tirelessly wear down the United States will to endure.
Though the result would be not quite so certain, perhaps, as if the communists also had atomic weapons, they would in the end, I think, succeed. Because of the lack of a positive United States policy, because it would not have presented to the world even the possibility of a political solution, its dreadful material strength would appear to the peoples as the unrelieved brutality of a murderer. Its failure to distinguish between the communist regime and that regime’s subject-victims would weld together the victims and their rulers. Americans themselves would be sickened and conscience-ridden by what would seem to them a senseless slaughter, never-ending, leading nowhere. The military leadership would be disoriented by the inability of their plans based on technical superiority to effect a decision. The failure to conceive the struggle politically would have given the communists the choice of weapons. From the standpoint of the United States, the entire world would have been turned into an ambush and a desert. In the long night, nerves would finally crack, and sentries would fire their last shots wildly into the darkness, and it would all be over.
Change “the world” to Vietnam and it reads like a history instead of a premonition. Tomorrow is another day, and I doubt that any of us will prove better at predicting what the future will bring than Burnham. We have lived through an era much different, more peaceful, and more sedate in the pace of events than the one he experienced between 1914 and 1945. We should beware of assuming, as he did, that the future will bear any resemblance to the immediate past. The world is still full of nuclear weapons, some of them already in the hands of, or soon to be in the hands of, dictators of suspect rationality. Some of our intellectuals soothe our fears with stories about the “vanishing of violence,” but as Omar Khayyam put it in the “Rubaiyat,” they could soon be “cast as foolish prophets forth, their mouths stopped with dust,” through some miscalculation or deliberate act of malice. As the Boy Scouts say, “be prepared.”
Posted on July 11th, 2012
And you thought I was crazy… Check out this article by Freeman Dyson in the October 1968 issue of Physics Today entitled “Interstellar Transport.” Dyson was an active participant in Project Orion, a program to build interplanetary space vehicles propelled by nuclear bombs. After the program was ended by the 1963 nuclear test ban treaty, he decided to write a paper for a high visibility journal to insure that the idea was kept alive and people were aware of its potential.
People thought big in those days, and Dyson’s notional interstellar transports certainly reflected the fact. The first was designed to absorb the blast of one-megaton deuterium-fueled bombs in a gigantic copper hemisphere with a radius of 10 kilometers, weighing 5 million tons. The fully loaded ship would have weighed 40 million tons, including 30 million of the one-megaton bombs. Assuming each bomb would require 10 pounds of plutonium (or about 60 pounds of highly enriched uranium), a total of 150,000 tons of plutonium would be required for the mission.
Dubious assumptions were made, as, for example, that 100% of the bomb’s energy would go into the kinetic energy of debris, even though it was known at the time (and certainly known to Dyson) that the actual fraction is much less than that. The cost was calculated to be one 1968 gross national product, based entirely on the projected cost of the necessary deuterium fuel (3 billion pounds at $200 per pound in 1968 dollars, for a total of $600 billion). In other words, the cost of the plutonium, copper, and other building material wasn’t even factored in, nor was the cost of getting it all into earth orbit prior to launch. In spite of all this, the massive ship, carrying about 20,000 colonists, would still take about 1300 years to reach the nearest stars. Barring a “Noah’s ark” forlorn-hope escape from a dying world, even Dyson considered this impractical for human travel, writing,
As a voyage of colonization a trip as slow as this does not make much sense on a human time scale. A nonhuman species, longer lived or accustomed to thinking in terms of millennia rather than years, might find the conditions acceptable.
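The arithmetic behind the figures above is easy to reproduce; the implied cruise speed at the end is my own back-of-the-envelope addition, assuming roughly 4.3 light-years to the Alpha Centauri system:

```python
# Reproducing the figures quoted for Dyson's "conservative" design.
LB_PER_TON = 2000  # short tons, consistent with the original numbers

bombs = 30_000_000
pu_per_bomb_lb = 10
pu_total_tons = bombs * pu_per_bomb_lb / LB_PER_TON
print(f"plutonium: {pu_total_tons:,.0f} tons")  # 150,000 tons

deuterium_lb = 3e9
cost_dollars = deuterium_lb * 200  # $200 per pound, 1968 dollars
print(f"fuel cost: ${cost_dollars / 1e9:.0f} billion")  # $600 billion

# Implied average speed (my estimate, not a figure from the paper):
speed_fraction_c = 4.3 / 1300  # ~4.3 light-years in ~1300 years
print(f"average speed: {speed_fraction_c:.4f} c")  # ~0.0033 c
```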
To obviate some of the objections to this “conservative” design, Dyson also proposed an “optimistic” design, which allowed some ablation of the surface of the vehicle nearest to the explosions, rather than requiring all the energy to be absorbed in solid material. After removing this energy limitation, the main limitation on the ship’s performance would be imposed by momentum, or, as Dyson put it, “the capacity of shock absorbers to transfer momentum from an impulsively accelerated pusher plate to the smoothly accelerated ship.” Basing his reasoning on the optimum performance of practical shock absorbers, Dyson calculated that such a ship could be accelerated at a constant one g, enabling it to reach the nearest stars in centuries rather than millennia. The cost, again based solely on the value of the deuterium fuel, would be only $60 billion 1968 dollars, or a tenth of the GNP at that time. The weight of the ship would be “only” 400,000 tons, a factor of 100 less than that of the “conservative” design. Dyson concluded,
If we continue our 4% growth rate we will have a GNP a thousand times its present size in about 200 years. When the GNP is multiplied by 1000, the building of a ship for $100B will seem like building a ship for $100M today. We are now building a fleet of Saturn V which cost about $100M each. It may be foolish but we are doing it anyhow. On this basis, I predict that about 200 years from now, barring a catastrophe, the first interstellar voyages will begin.
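Dyson’s growth projection is simple compound interest, and easy to check; if anything, his “about 200 years” was conservative:

```python
import math

growth = 1.04  # Dyson's assumed 4% annual GNP growth

# Years needed for GNP to grow 1000-fold at that rate:
years_to_1000x = math.log(1000) / math.log(growth)
print(f"{years_to_1000x:.0f} years")  # 176 years

# After a full 200 years the multiple is even larger:
print(f"{growth ** 200:,.0f}x")  # roughly 2,550x
```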
I suspect Dyson wrote most of this paper “tongue in cheek.” He’s nobody’s fool, has remarkable achievements to his credit in fields such as quantum electrodynamics, solid state physics, and nuclear engineering, and remains highly regarded by his peers. Nobel laureate Steven Weinberg said that the Nobel Committee had “fleeced” Dyson by never awarding him the prize. The objections to his designs are obvious, but for all that, bomb-propelled space vehicles are by no means impractical. I suspect Dyson realized that other scientists would recognize ways they could improve on his “conservative” and “optimistic” designs as soon as they read the paper, and start thinking about their own versions. Project Orion might be dead as a budget line item, but would live on in the minds and imaginations of his peers. And so it did.
Posted on June 10th, 2012
As I mentioned in a previous post about fusion progress, signs of life have finally been appearing in scientific journals from the team working to achieve fusion ignition at the National Ignition Facility, or NIF, located at Lawrence Livermore National Laboratory (LLNL) in California. At the moment they are “under the gun,” because the National Ignition Campaign (NIC) is scheduled to end with the end of the current fiscal year on September 30. At that point, presumably, work at the facility will be devoted mainly to investigations of nuclear weapon effects and physics, which do not necessarily require fusion ignition. Based on a paper that recently appeared in Physical Review Letters, chances of reaching the ignition goal before that happens are growing dimmer.
The problem has to do with a seeming contradiction in the physical requirements for fusion to occur in the inertial confinement approach pursued at LLNL. In the first place, it is necessary for the NIF’s 192 powerful laser beams to compress, or implode, a target containing fusion fuel in the form of two heavy isotopes of hydrogen to extremely high densities. It is much easier to compress materials that are cold than those that are hot. Therefore, it is essential to keep the fuel material as cold as possible during the implosion process. In the business, this is referred to as keeping the implosion on a “low adiabat.” However, for fusion ignition to occur, the nuclei of the fuel atoms must come extremely close to each other. Unfortunately, they’re not inclined to do that, because they’re all positively charged, and like charges repel. How to overcome the repulsion? By making the fuel material extremely hot, causing the nuclei to bang into each other at high speed. The whole trick of inertial confinement fusion, then, is to keep the fuel material very cold, and then, in a tiny fraction of a second, while its inertia holds it in place (hence the name, “inertial” confinement fusion), raise it, or at least a small bit of it, to the extreme temperatures necessary for the fusion process to begin.
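To put a number on that repulsion, here is a back-of-the-envelope estimate of my own (the simple contact-barrier formula from any nuclear physics text, not anything from the NIF team):

```python
# Rough Coulomb barrier between a deuteron and a triton: the electrostatic
# potential energy when the two nuclei are nearly touching.
E2_MEV_FM = 1.44   # e^2 / (4*pi*eps0), in MeV*femtometers
R0_FM = 1.2        # nuclear radius constant: r = R0 * A**(1/3)

r_d = R0_FM * 2 ** (1 / 3)   # deuteron (A=2) radius, fm
r_t = R0_FM * 3 ** (1 / 3)   # triton (A=3) radius, fm
barrier_mev = E2_MEV_FM / (r_d + r_t)
print(f"Coulomb barrier: ~{barrier_mev * 1000:.0f} keV")  # ~444 keV

# Ignition temperatures are only ~5-10 keV, some fifty times lower; quantum
# tunneling and the energetic tail of the thermal distribution bridge the gap.
```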
The proposed technique for creating the necessary hot spot was always somewhat speculative, and more than one fusion expert at the national laboratories was dubious that it would succeed. It consisted of creating a train of four shocks during the implosion process, timed to overtake one another precisely at the moment of maximum compression, thereby creating the necessary hot spot. Four shocks are needed because of well-known theoretical limits on the increase in temperature that can be achieved with a single shock. Which brings us back to the paper in Physical Review Letters.
The paper, entitled Precision Shock Tuning on the National Ignition Facility, describes the status of efforts to get the four shocks to jump through the hoops described above. One cannot help but be impressed by the elegant diagnostic tools used to observe and measure the shocks. They are capable of peering through materials under the extreme conditions in the NIF target chamber, focusing on the tiny, imploded target core, and measuring the progress of a train of shocks over a period that only lasts for a few billionths of a second! These diagnostics, developed with the help of another team of brilliant scientists at the OMEGA laser facility at the University of Rochester’s Laboratory for Laser Energetics, are a triumph of human ingenuity. They reveal that the NIF is close to achieving the ignition goal, but not quite close enough. As noted in the paper, “The experiments also clearly reveal an issue with the 4th shock velocity, which is observed to be 20% slower than predictions from numerical simulation.”
It will be a neat trick indeed if the NIF team can overcome this problem before the end of the National Ignition Campaign. In the event that they don’t, one must hope that the current administration is not so short-sighted as to conclude that the facility is a failure, and severely reduce its funding. There is too much at stake. I have always been dubious about the possibility that either the inertial or magnetic approach to fusion will become a viable source of energy any time in the foreseeable future. However, I may be wrong, and even if I’m not, achieving inertial fusion ignition in the laboratory may well point the way to as yet undiscovered paths to the fusion energy goal. Ignition in the laboratory will also give us a significant advantage over other nuclear weapons states in maintaining our arsenal without nuclear testing.
Based on the progress reported to date, there is no basis for the conclusion that ignition is unachievable on the NIF. Even if the central hot spot approach currently being pursued proves too difficult, there are alternatives, such as polar direct drive and fast ignition. However, pursuing these alternatives will take time and resources. They will become a great deal more difficult to realize if funding for NIF operations is severely cut. It will also be important to maintain the ancillary capability provided by the OMEGA laser. OMEGA is much less powerful but also a good deal more flexible and nimble than the gigantic NIF, and has already proved its value in testing and developing diagnostics, investigating novel experimental approaches to fusion, developing advanced target technology, etc.
We have built world-class facilities. Let us persevere in the quest for fusion. We cannot afford to let this chance slip.
Posted on April 17th, 2012
The National Ignition Facility, or NIF, is a huge, 192 beam laser system, located at Lawrence Livermore National Laboratory in California. It was designed, as the name implies, to achieve thermonuclear ignition in the laboratory. “Ignition” is generally accepted to mean getting a greater energy output from fusion than the laser input energy. Unlike magnetic confinement fusion, the approach currently being pursued at the International Thermonuclear Experimental Reactor, or ITER, now under construction in France, the goal of the NIF is to achieve ignition via inertial confinement fusion, or ICF, in which the fuel material is compressed and heated to the extreme conditions at which fusion occurs so quickly that it is held in place by its own inertia.
The NIF has been operational for over a year now, and a two year campaign is underway with the goal of achieving ignition by the end of this fiscal year. Recently, there has been a somewhat ominous silence from the facility, manifesting itself as a lack of publications in the major journals favored by fusion scientists. That doesn’t usually happen when there is anything interesting to report. Finally, however, some papers have turned up in the journal Physics of Plasmas, containing reports of significant progress.
To grasp the importance of the papers, it is necessary to understand what is supposed to occur within the NIF target chamber for fusion to occur. Of course, just as in magnetic fusion, the goal is to bring a mixture of deuterium and tritium, two heavy isotopes of hydrogen, to the extreme conditions at which fusion takes place. In the ICF approach, this hydrogen “fuel” is contained in a tiny, BB-sized target. However, the lasers are not aimed directly at the fuel “capsule.” Instead, the capsule is suspended in the middle of a tiny cylinder made of a heavy metal like gold or uranium. The lasers are fired through holes on each end of the cylinder, striking the interior walls, where their energy is converted to x-rays. It is these x-rays that must actually bring the target to fusion conditions.
It was recognized many years ago that one couldn’t achieve fusion ignition by simply heating up the target. That would require a laser driver orders of magnitude bigger than the NIF. Instead, it is first necessary to compress, or implode, the fuel material to extremely high density. Obviously, it is harder to “squeeze” hot material than cold material to the necessary high densities, so the fuel must be kept as “cold” as possible during the implosion process. However, cold fuel won’t ignite, raising the question of how to heat it up once the necessary high densities have been achieved.
It turns out that the answer is shocks. When the laser generated x-rays hit the target surface, they do so with such force that it begins to implode faster than the speed of sound. Everyone knows that when a plane breaks the sound barrier, it, too, generates a shock, which can be heard as a sonic boom. The same thing happens in ICF fusion targets. When such a shock converges at the center of the target, the result is a small “hot spot” in the center of the fuel. If the temperature in the hot spot were high enough, fusion would occur. Each fusion reaction would release a high energy helium nucleus, or alpha particle, and a neutron. The alpha particles would be slammed to a stop in the surrounding cold fuel material, heating it, in turn, to fusion conditions. This would result in a fusion “burn wave” that would propagate out through the rest of the fuel, completing the fusion process.
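The way the reaction energy is divided between the alpha particle and the neutron follows from momentum conservation alone, given the standard textbook figure of 17.6 MeV released per D-T fusion:

```python
# Energy split in the D-T reaction. The two products fly apart with equal
# and opposite momenta, so kinetic energy divides inversely with mass.
Q_MEV = 17.6         # total energy released per D-T fusion (textbook value)
A_ALPHA, A_N = 4, 1  # mass numbers of the alpha particle and the neutron

e_alpha = Q_MEV * A_N / (A_ALPHA + A_N)
e_neutron = Q_MEV * A_ALPHA / (A_ALPHA + A_N)
print(f"alpha: {e_alpha:.1f} MeV, neutron: {e_neutron:.1f} MeV")
# alpha: 3.5 MeV, neutron: 14.1 MeV
```

The charged alpha is stopped within the dense fuel and deposits its energy locally, which is exactly what the burn wave needs; the neutral neutron mostly escapes the target.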
The problem is that one shock isn’t enough to create such a “hot spot.” Four of them are required, all precisely timed by the carefully tailored NIF laser pulse to converge at the center of the target at exactly the same time. This is where real finesse is needed in laser fusion. The implosion must be extremely symmetric, or the shocks will not converge properly. The timing must be exact, and the laser pulse must deliver just the right amount of energy.
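Why won’t one big shock do? A standard gas-dynamics result (sketched here for an ideal monatomic gas, a considerable simplification of real fuel conditions) is that the density jump across a single shock is capped, no matter how strong the shock:

```python
# Rankine-Hugoniot density jump across a single shock in an ideal gas.
GAMMA = 5 / 3  # adiabatic index of a monatomic ideal gas

def density_ratio(mach):
    """Post-shock / pre-shock density ratio for a shock of given Mach number."""
    return (GAMMA + 1) * mach ** 2 / ((GAMMA - 1) * mach ** 2 + 2)

for m in (2, 5, 100):
    print(f"M = {m:>3}: ratio = {density_ratio(m):.2f}")
# Even as M -> infinity, the ratio approaches (GAMMA+1)/(GAMMA-1) = 4.
```

A train of weaker shocks, each compressing modestly, stays close to an isentrope and so can reach far higher final densities than one violent shock, which wastes its energy heating the fuel prematurely.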
One problem in the work to date has been an inability to achieve high enough implosion velocities for the above scenario to work as planned. One of the Physics of Plasmas papers reports that, by increasing the laser energy and replacing with depleted uranium some of the gold originally used in the wall of the cylinder, or “hohlraum,” in which the fuel capsule is mounted, velocities of 99% of those required for ignition have been achieved. In view of the recent announcement that a shot on the NIF had exceeded its design energy of 1.8 megajoules, it appears the required velocity is within reach. Another of the Physics of Plasmas papers dealt with the degree to which implosion asymmetries were causing harmful mixing of the surrounding cold fuel material into the imploded core of the target. It, too, provided grounds for optimism.
In the end, I suspect the success or failure of the NIF will depend on whether the complex sequence of four shocks can really be made to work as advertised. That will depend on the accuracy of the physics algorithms in the computer codes that have been used to model the experiments. Time and again, earlier and less sophisticated codes have been wrong because they didn’t accurately account for all the relevant physics. There is no guarantee that critical phenomena have not been left out of the current versions as well. We may soon find out, if the critical series of experiments intended to achieve ignition before the end of the fiscal year is carried out as planned.
One can but hope they will succeed, if only because some of our finest scientists have dedicated their careers to the quest to achieve the elusive goal of controlled fusion. Even if they do, fusion based on the NIF approach is unlikely to become a viable source of energy, at least in the foreseeable future. Laser fusion may prove scientifically feasible, but getting useful energy out of it will be an engineering nightmare, dangerous because of the need to rely on highly volatile and radioactive tritium, and much too expensive to compete with potential alternatives. I know many of the faithful in the scientific community will beg to differ with me, but, trust me, laser fusion energy ain’t gonna happen.
On the other hand, if ignition is achieved, the NIF will be invaluable to the country, not as a source of energy, but for the reason it was funded in the first place – to ensure that our nation has an unmatched suite of experimental facilities to study the physics of nuclear weapons in an era free of nuclear testing. As long as the test ban remains in place, unique access to facilities like the NIF, which can approach the extreme physical conditions within exploding nukes, will give us a significant leg up on the competition. For that, if for no other reason, we should keep our fingers crossed that the NIF team can finally clear the last technical hurdles and reach the goal they have been working towards for so long.
Posted on January 19th, 2011 No comments
For those who don’t follow fusion technology, the National Ignition Facility, or NIF, is a giant, 192 beam laser facility located at Lawrence Livermore National Laboratory. As its name would imply, it is designed to achieve fusion ignition, which has been variously defined, but basically means that you get more energy out from the fusion process than it was necessary to pump into the system to set off the fusion reactions. There are two “classic” approaches to achieving controlled fusion in the laboratory. One is magnetic fusion, in which light atoms stripped of their electrons, or ions, typically heavy isotopes of hydrogen, are confined in powerful magnetic fields as they are heated to the temperatures necessary for fusion to occur. The other is inertial confinement fusion, or ICF, in which massive amounts of energy are dumped into a small target, causing it to reach fusion conditions so rapidly that significant fusion can occur in the very short time that the target material is held in place by its own inertia. The NIF is a facility of the latter type.
There are, in turn, two basic approaches to ICF. In one, referred to as direct drive, the target material is directly illuminated by the laser beams. In the other, indirect drive, the target is placed inside a small container, or “hohlraum,” with entrance holes for the laser beams. These are aimed at the inside walls of the hohlraum, where they are absorbed, producing x-rays which then compress and ignite the target. The NIF currently uses the latter approach.
The NIF was completed and became operational in 2009. Since that time, the amount of news coming out of the facility about the progress of experiments has been disturbingly slight. That is not a good thing. If everything were working as planned, a full schedule of ignition experiments would be underway as I write this. Instead, the facility is idle. The results of the first experimental campaign, announced in January, sounded positive. The NIF had operated at a large fraction of its design energy output of 1.8 Megajoules. Surrogate targets had been successfully compressed to very high densities in symmetric implosions, as required for fusion. However, on reading the tea leaves, things did not seem quite so rosy. Very high levels of laser plasma interaction (LPI) had been observed. In such complex scattering interactions, laser light can be scattered out of the hohlraum, or in other undesired directions, and hot electrons can be generated, wreaking havoc with the implosion process by preheating the target. We were assured that ways had been found to control the excess LPI, and even turn it to advantage in controlling the symmetry of the implosion. However, such “tuning” with LPI had not been foreseen at the time the facility was designed, and little detail was provided on how the necessary delicate, time-dependent shaping of the laser pulses would be achieved under such conditions.
After a long pause, another series of “integrated” experiments was announced in October. Even less information was released on this occasion. We were informed that symmetric implosions had been achieved, and that, “From both a system integration and from a physics point of view, this experiment was outstanding.” Since then, nothing.
It’s hard to imagine that the outlook is really as rosy as the above statement would imply. The NIF was designed for a much higher shot rate. If it sat idle through much of 2010, there must be a reason. It could be that damage to the laser optics has been unexpectedly high. This would not be surprising. Delicate crystals are used at the end of the chain of laser optics to triple the frequency of the laser light, and, given that the output energy of the facility is more than an order of magnitude larger than that of its next largest competitor, damage may have occurred in unexpected ways, as it did on Nova, the NIF’s predecessor at Livermore. LPI may, in fact, be more serious, more difficult to control, and more damaging than the optimistic accounts in January implied. Unexpected physics may be occurring in the absorption of laser light at the hohlraum walls. Whatever the problem, Livermore would be well advised to be forthcoming about it in its press releases. After all, the NIF will achieve ignition or not, regardless of how well the PR is managed.
All this seems very discouraging for the scientists who have devoted their careers to the quest for fusion energy, not to mention the stewards of the nation’s nuclear weapons stockpile, whose needs the NIF was actually built to address. In the end, these apparent startup problems may be overcome, and ignition achieved after all. However, I rather doubt it, unless perhaps Livermore comes up with an alternative to its indirect drive approach.
Posted on November 18th, 2010 3 comments
The Reliable Replacement Warhead is a really bad idea that never seems to go away. Congress has wisely condemned it, and it was explicitly rejected in the nation’s latest Nuclear Posture Review, but now the RRW has popped up again, artificially linked to the New Start arms control treaty, in a couple of op-eds, one in the New York Times by former UN ambassador John Bolton, and another in the Wall Street Journal by R. James Woolsey, former arms control negotiator and Director of the CIA. Bolton writes, “Congress should pass a new law financing the testing and development of new warhead designs before approving New Start,” and Woolsey chimes in,
…the administration needs to commit to replacing and modernizing our aging nuclear infrastructure as well as the bombers, submarines and ballistic missiles – and the warheads on them – that provide our ultimate guarantee of national security. The Senate’s resolution of ratification should, for example, require the president to commit to specific modernization plans so we can be sure these programs will have his full support. The administration has particularly resisted warhead modernization, beginning with its Nuclear Posture Review last year. This led 10 former directors of the nation’s nuclear weapons labs to write to the secretaries of Defense and Energy urging them to revisit that misguided policy. The secretaries should commit to doing so.
In fact, one hopes they have enough sense not to follow that advice. What Bolton and Woolsey are referring to when they speak of “modernizing” weapons isn’t the continued refurbishment of old weapons, or the adding of new conventional packaging around them, as in the case of the B61-11, to make them more effective for earth penetration or some other specific mission. They are speaking of a new design of the nuclear device itself. At the moment, the RRW is the only player in that game.
Going ahead with the RRW would be self-destructive at a number of levels. In the first place, it’s unnecessary. There is no reason to doubt the safety and reliability of the existing weapons in our arsenal, nor our ability to maintain them into the indefinite future. A reason given for building the RRW is that low yield versions could be designed that would be “more effective deterrents,” because enemies would consider it a lot more likely that we would actually use such a weapon against them, as opposed to our existing high yield weapons. The problem with that logic is that they would be right. Given the alacrity with which we went to war in Iraq, it is not hard to imagine that we would be sorely tempted to use a mini-nuke to take out, say, a buried and/or hardened enemy bunker suspected of containing WMDs. Any US first use of nuclear weapons, for whatever reason, and regardless of the chances of “collateral damage,” would be a disastrous mistake. It would let the nuclear genie out of the bottle once again, serving as a perfect pretext for the use of nuclear weapons by others, and particularly by terrorists against us. Those who think the Maginot line of nuclear detectors we are installing at our ports, or the imaginary difficulty of mastering the necessary technology, will protect us from such an eventuality, are gravely mistaken.
The building of a new weapon design would also provide a fine excuse for others to modernize their own arsenals. It is hard to imagine how this could work to the advantage of the United States. Our nuclear technology is mature, and it would simply give the lesser nuclear powers a chance to catch up with us. More importantly, it would almost inevitably imply a return to nuclear testing, thereby negating a tremendous advantage we now hold over every other nuclear power, namely, our above ground experimental (AGEX) capability. In the National Ignition Facility at Lawrence Livermore National Laboratory, the Z pulsed power machine at Sandia, the DARHT radiographic test facility at Los Alamos, and a host of other experimental facilities, we possess an ability to study the physics that occurs in conditions near those in nuclear detonations that no other country comes close to matching. It would be utterly pointless to throw that advantage away in order to build a new nuclear weapon we don’t need.
It does not surprise me that 10 former directors of the nation’s nuclear weapons laboratories signed a letter calling on the Secretaries of Energy and Defense to revisit our RRW policy. It would certainly serve the interests of the nuclear weapons laboratories. It is much easier to attract talented physicists to an active testing program than to serve as custodians of an aging stockpile, and new designs would mean new money, and the removal of any perceived existential threats to one or more of the existing labs on the basis of their redundancy. The problem is that it would not serve the interests of the country.
Let the RRW stay buried. The nuclear genie will return soon enough as it is.
Posted on October 23rd, 2010 8 comments
Thorium is a promising candidate as a future source of energy. I just wonder what it is about the stuff that inspires so many people to write nonsense about it. It doesn’t take a Ph.D. in physics to spot the mistakes. Most of them should be obvious to anyone who’s taken the trouble to read a high school science book. Another piece of misinformation has just turned up at the website of Popular Mechanics, dubiously titled The Truth about Thorium and Nuclear Power.
The byline claims that, “Thorium has nearly 200 times the energy content of uranium,” a statement I will assume reflects the ignorance of the writer rather than any outright attempt to deceive. She cites physicist Carlo Rubbia as the source, but if he ever said anything of the sort, he was making some very “special” assumptions about the energy conversion process that she didn’t quite understand. I assume it must have had something to do with his insanely dangerous subcritical reactor scheme, in which case the necessary assumptions to get a factor of 200 would have necessarily been very “special” indeed. Thorium cannot sustain the nuclear chain reaction needed to produce energy on its own. It must first be transmuted to an isotope of uranium with the atomic weight of 233 (U233) by absorbing a neutron. Strictly speaking, then, the above statement is nonsense, because the “energy content” of thorium actually comes from a form of uranium, U233, which can sustain a chain reaction on its own. However, let’s be charitable and compare natural thorium and natural uranium as both come out of the ground when mined.
As I’ve already pointed out, thorium cannot be directly used in a nuclear reactor on its own. Natural uranium actually can. It consists mostly of an isotope of uranium with an atomic weight of 238 (U238), but also a bit over 0.7% of a lighter isotope with an atomic weight of 235 (U235). U238, like thorium, is unable to support a nuclear chain reaction on its own, but U235, like U233, can. Technically speaking, what that means is that, when the nucleus of an atom of U233 or U235 absorbs a neutron, enough energy is released to cause the nucleus to split, or fission. When U238 or natural thorium (Th232) absorbs a neutron, energy is also released, but not enough to cause fission. Instead, they become U239 and Th233, which eventually decay to produce Pu239 and U233, respectively.
Let’s try to compare apples and apples, and assume that enough neutrons are around to convert all the Th232 to U233, and all the U238 to Pu239. In that case we are left with a lump of pure U233 derived from the natural thorium and a mixture of about 99.3% Pu239 and 0.7% U235 from the natural uranium. In the first case, the fission of each atom of U233 will release, on average, 200.1 million electron volts (MeV) of energy that can potentially be converted to heat in a nuclear reactor. In the second, each atom of U235 will release, on average, 202.5 MeV, and each atom of Pu239 211.5 MeV of energy. In other words, the potential energy release from natural thorium is actually about equal to that of natural uranium.
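For anyone who wants to check the arithmetic, here’s a back-of-the-envelope sketch in Python using the per-fission energies and isotope fractions quoted above. The assumption that every fertile atom gets converted and fissioned is, of course, the idealization of this paragraph, not a real fuel cycle:

```python
# Compare the potential fission energy per atom of fully converted
# natural thorium vs. fully converted natural uranium, using the
# per-fission energies quoted above (all in MeV).
E_U233 = 200.1   # bred from Th232 via neutron capture and decay
E_U235 = 202.5
E_PU239 = 211.5  # bred from U238 via neutron capture and decay

F_U235 = 0.007   # fissile fraction of natural uranium
F_U238 = 0.993   # fertile fraction, assumed fully converted to Pu239

thorium_mev_per_atom = E_U233
uranium_mev_per_atom = F_U235 * E_U235 + F_U238 * E_PU239

print(f"thorium (as U233):      {thorium_mev_per_atom:.1f} MeV/atom")
print(f"uranium (U235 + Pu239): {uranium_mev_per_atom:.1f} MeV/atom")
print(f"uranium/thorium ratio:  {uranium_mev_per_atom / thorium_mev_per_atom:.2f}")
```

The ratio comes out to about 1.06 – that is, the two are roughly equal, as stated above, with natural uranium actually holding a slight edge.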
Unfortunately, the “factor of 200” isn’t the only glaring mistake in the paper. The author repeats the familiar yarn about how uranium was chosen over thorium for power production because it produced plutonium needed for nuclear weapons as a byproduct. In fact, uranium would have been the obvious choice even if weapons production had not been a factor. As pointed out earlier, natural uranium can sustain a chain reaction in a reactor on its own, and thorium can’t. Natural uranium can be enriched in U235 to make more efficient and smaller reactors. Thorium can’t be “enriched” in that way at all. Thorium breeders produce U232, a highly radioactive and dangerous isotope, which can’t be conveniently separated from U233, complicating the thorium fuel cycle. Finally, the plutonium that comes out of nuclear reactors designed for power production, known as “reactor grade” plutonium, contains significant quantities of heavier isotopes of plutonium in addition to Pu239, making it unsuitable for weapons production.
Apparently the author gleaned some further disinformation from Seth Grae, CEO of Lightbridge, a Virginia-based company promoting thorium power. He supposedly told her that U233 produced in thorium breeders “fissions almost instantaneously.” In fact, the probability that it will fission is entirely comparable to that of U235 or Pu239, and it will not fission any more “instantaneously” than other isotopes. Why Grae felt compelled to feed her this fable is beyond me, as “instantaneous” fission isn’t necessary to prevent diversion of U233 as a weapons material. Unlike plutonium, it can be “denatured” by mixing it with U238, from which it cannot be chemically separated.
It’s a mystery to me why so much nonsense is persistently associated with discussions of thorium, a potential source of energy that has a lot going for it. It has several very significant advantages over the alternative uranium/plutonium breeder technology, such as not producing significant quantities of plutonium and other heavy actinides, less danger that materials produced in the fuel cycle will be diverted for weapons purposes if the technology is done right, and the ability to operate in a more easily controlled “thermal” neutron environment. I can only suggest that people who write popular science articles about nuclear energy take the time to educate themselves about the subject. Tried and true old textbooks like Introduction to Nuclear Engineering and Introduction to Nuclear Reactor Theory by John Lamarsh have been around for years, don’t require an advanced math background, and should be readable by any intelligent person with a high school education.
Posted on September 1st, 2010 18 comments
The Telegraph (hattip Insty) turned the hype level to max in a recent article about the potential of thorium reactors. According to the headline, “Obama could kill fossil fuels overnight with a nuclear dash for thorium.” Against all odds, this is to happen in three to five years with a “new Manhattan Project,” and a “silver bullet” in the form of a new generation of thorium reactors. The author is so vague about the technologies he’s describing that it’s hard to avoid the conclusion that he simply doesn’t know what he’s talking about, and couldn’t be bothered to spend a few minutes with Google to find out. I’ll try to translate.
It’s claimed that thorium “eats its own waste.” In fact, thorium is very promising as a future source of energy, but this is nonsense. Apparently it’s based on the fact that certain types of thorium reactors actually could burn their own fuel material, as well as plutonium scavenged from conventional reactor waste and other transuranics, much more completely than alternative designs. This is certainly an advantage, but the fission products (lighter elements left over from the splitting of uranium and plutonium) would still be highly radioactive, and would certainly qualify as waste. Such claims are so obviously spurious that they play into the hands of opponents of nuclear power.
It is also claimed that “all (thorium) is potentially usable as fuel, compared to just 0.7% for uranium.” In fact, thorium is not a fissile material, meaning that, unlike uranium 235 (U235), which is the 0.7% of natural uranium the author is referring to, it cannot sustain a nuclear chain reaction on its own. It must first be converted to an isotope of uranium, U233, which is fissile. In fact, the U238 that makes up most of the leftover 99.3% of natural uranium is “potentially usable as fuel” in that sense as well, by conversion to plutonium 239, also a fissile material.
The author is vague about exactly what kind of reactors he is referring to, lumping Dr. Carlo Rubbia’s subcritical design, which depends on a proton accelerator to provide enough neutrons to keep the fission process going, and molten fluoride salt reactors, which do not necessarily require such an accelerator. He claims that, “Thorium-fluoride reactors can operate at atmospheric temperature,” which they certainly could not if the goal were to generate electric power. I suspect that what he means here is that, unlike plutonium breeders, which require a high energy neutron spectrum to produce more fuel than they consume, thorium breeders could potentially use “thermal” neutrons that have been slowed to the point that their average energy, when converted to a “temperature,” would be much closer to that of the other material in the reactor core.
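To put a number on “thermal,” the conversion between a neutron’s energy and an equivalent temperature is just Boltzmann’s constant. A minimal sketch (the 293.6 K reference is the standard room-temperature convention for thermal neutrons):

```python
# Convert between a neutron's kinetic energy and the "temperature" of a
# thermal distribution with that most-probable energy, via E = k_B * T.
K_B_EV = 8.617333e-5  # Boltzmann constant in eV per kelvin

def thermal_energy_eV(temperature_K):
    """Most-probable neutron energy in a thermal spectrum at this temperature."""
    return K_B_EV * temperature_K

def neutron_temperature_K(energy_eV):
    """Temperature whose thermal spectrum peaks at this neutron energy."""
    return energy_eV / K_B_EV

# A room-temperature ("thermal") neutron carries about 0.0253 eV ...
print(thermal_energy_eV(293.6))
# ... while a 2 MeV fission-spectrum neutron corresponds to over
# twenty billion kelvin -- nothing like "atmospheric temperature."
print(neutron_temperature_K(2.0e6))
```

That gulf, between hundredths of an electron volt and millions, is what separates a thermal-spectrum thorium breeder from the fast spectrum a plutonium breeder requires.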
In any case, the design he seems to be so excited about is Dr. Rubbia’s “energy amplifier,” which, as noted above, would be subcritical, requiring a powerful, high current proton accelerator to keep the fission process going. It would do this via spallation, a process in which a copious source of the neutrons required to keep the reaction going would be provided via interaction of the protons with heavy nuclei such as lead, or thorium itself. This is the process used to produce neutrons at the Oak Ridge Spallation Neutron Source. Such reactors could easily be “turned off” by simply shutting down the source of neutrons. However, the idea that they would be inherently “safer” is dangerously inaccurate. In fact, they would be an ideal path to covert acquisition of nuclear weapons. Thorium reactors work by transmuting thorium into U233, which is the isotope that fissions to produce the lion’s share of the energy. It is also an isotope that, like U235 and Pu239, can be used to make nuclear bombs.
The article downplays this risk as follows:
After the Manhattan Project, US physicists in the late 1940s were tempted by thorium for use in civil reactors. It has a higher neutron yield per neutron absorbed. It does not require isotope separation, a big cost saving. But by then America needed the plutonium residue from uranium to build bombs.
“They were really going after the weapons,” said Professor Egil Lillestol, a world authority on the thorium fuel-cycle at CERN. “It is almost impossible [to] make nuclear weapons out of thorium because it is too difficult to handle. It wouldn’t be worth trying.” It emits too many high (energy) gamma rays.
What Lillestol is referring to is the fact that, in addition to U233, thorium reactors also produce a certain amount of U232, a highly radioactive isotope of uranium with a half-life of 68.9 years whose decay does, indeed, release potentially deadly gamma rays. It would be extremely difficult, if not impossible, to remove it from the U233, and, if enough of it were present, it would certainly complicate the task of building a bomb. The key phrase here is “if enough of it were present.” Thorium enthusiasts like Lillestol never seem to do the math. In fact, as can be seen here, even conventional thorium breeders could be designed to produce U233 sufficiently free of U232 to allow workers to fabricate a weapon without serious danger of receiving a lethal dose of gamma rays. However, large concentrations of highly radioactive fission products would make it very difficult to surreptitiously extract the uranium, and it would also be possible to mix the fuel material with natural or depleted uranium, reducing the isotopic concentration of U233 below that necessary to make a bomb.
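To see why even modest U232 contamination matters, its specific activity follows directly from the 68.9-year half-life quoted above, via the textbook relation A = λN (rounded constants, a quick sketch rather than a health-physics calculation):

```python
import math

# Specific activity of U232 from its 68.9-year half-life: the decay
# constant lambda = ln(2)/t_half, times the number of atoms per gram.
AVOGADRO = 6.02214e23
SECONDS_PER_YEAR = 3.156e7
HALF_LIFE_S = 68.9 * SECONDS_PER_YEAR
MOLAR_MASS_G = 232.0  # grams per mole of U232

decay_const_per_s = math.log(2) / HALF_LIFE_S
atoms_per_gram = AVOGADRO / MOLAR_MASS_G
activity_bq_per_g = decay_const_per_s * atoms_per_gram

print(f"{activity_bq_per_g:.2e} Bq/g")  # ~8.3e11 Bq/g, about 22 Ci/g
```

That works out to roughly 22 curies per gram, hundreds of times more active than Pu239, which is why the hard gamma emitters in its decay chain make heavily contaminated U233 so unpleasant to handle – when, that is, enough U232 is actually present.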
With subcritical reactors of the type proposed by Rubbia, the problem of making a bomb gets a whole lot easier. Rogue state actors, and even terrorist groups if we “succeed” in coming up with a sufficiently inexpensive design for high energy proton accelerators, could easily modify them to produce virtually pure U233, operating small facilities that it would be next to impossible for international monitors to detect. There are two possible pathways for the production of U232 from thorium, both of which involve a reaction in which a neutron knocks two neutrons out of a heavy nucleus of Th232 or U233. Those reactions can’t occur unless the initial neutron is carrying a lot of energy; as can be seen in figure 8 of the article linked above, the threshold is around 6 million electron volts (MeV). That means that, in order to produce virtually pure U233, all that’s necessary is to slow the incoming spallation neutrons below that energy. That’s easily done. Imagine two billiard balls on a table. If you hit one as hard as you can at the other one, what happens when they collide? If your aim was true, the first ball stops, transferring all its energy to the second one. The same thing can be done with neutrons. Pass the source neutrons through a layer of material full of light atoms such as paraffin or heavy water, and they will bounce off the light nuclei, losing energy in the process, until virtually none of them have energies above 6 MeV. If such low energy neutrons were then passed on to a subcritical core, they would produce U233 with almost no U232 contamination.
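The “easily done” claim can be checked with the standard textbook moderation formula, the average logarithmic energy loss per elastic collision. The 20 MeV starting energy below is an illustrative choice, since spallation neutrons actually emerge with a broad spectrum:

```python
import math

def avg_log_decrement(A):
    """Average logarithmic energy loss per elastic collision with a
    nucleus of mass number A (the textbook quantity xi)."""
    if A == 1:
        return 1.0  # hydrogen: the formula below is singular; the limit is 1
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1.0 + alpha * math.log(alpha) / (1.0 - alpha)

def collisions_needed(E0_eV, E_eV, A):
    """Average number of elastic collisions to slow a neutron from E0 to E."""
    return math.log(E0_eV / E_eV) / avg_log_decrement(A)

E0 = 20.0e6  # illustrative spallation-neutron energy, eV
# Dropping below the ~6 MeV U232-production threshold takes only about
# one collision with hydrogen (A = 1, as in paraffin) ...
print(collisions_needed(E0, 6.0e6, A=1))   # ~1.2
# ... and slowing all the way to thermal energies takes only a few dozen.
print(collisions_needed(E0, 0.025, A=1))   # ~20.5
```

One or two collisions with hydrogen suffice to get below the U232 threshold, which is the whole point: a thin layer of moderator between source and core is enough.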
It gets worse. Unlike Pu239, U233 does not emit a lot of spontaneous neutrons. That means it can be used to make a simple gun-type nuclear weapon with little fear that a stray neutron will cause it to fizzle before optimum criticality is reached. And, by the way, a lot less of it would be needed than would be required for a similar weapon using U235, the fissile material in the bomb that destroyed Hiroshima.
We’re quite capable of blowing ourselves up without Rubbia’s subcritical reactors. Let’s not make it any easier than it already is. Thorium reactors have many advantages over other potential sources of energy, including wind and solar. However, if we’re going to do thorium, let’s do it right.
UPDATE: Steven Den Beste gets it right at Hot Air. His commenters throw out the usual red herrings about the US choosing U235 and Pu239 over U233 in the Manhattan Project (for good reasons that had nothing to do with U233’s suitability as a bomb material) and the grossly exaggerated and misunderstood problem with U232. You don’t have to be a nuclear engineer to see through these fallacious arguments. The relevant information is all out there on the web, it’s not classified, and it can be understood by any bright high school student who takes the time to get the facts.
Posted on August 24th, 2010 No comments
It appears that authorities in Moldova seized about four pounds of contraband uranium and arrested several suspects. The material in question turned out to be the isotope uranium 238 (U238), meaning that, unlike the fissile isotope U235, it couldn’t be used to make a bomb. Maybe it’s just me, but it seems that whenever I have personal knowledge of what happened in an incident that makes the news, or expertise regarding its subject, the mainstream media, with their layers of editors and fact checkers, manage to botch the story. For example, CNN uncritically quotes Kirill Motspan, a spokesman for Moldova’s Interior Ministry as saying that, “…it was his understanding that 1 kilo of uranium costs $6.3 million on the black market and that is what the smugglers were expecting to get.” I seriously doubt that Motspan meant just any uranium, and especially not U238. If that were the case, the guys who fly A-10 Warthog ground support planes armed with Gatling guns that pump out rounds containing just under a pound each of the stuff at 4,200 rounds per minute must be using caddies to recover them. He was probably referring to uranium highly enriched in isotope 235, which can be used to make a bomb. In other words, the smugglers were intending to snooker their customers. Anyone can Google the fact that natural uranium, which contains at least a little (about 0.71%) U235, is currently selling for just under $50 per pound.
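Just for fun, here’s the arithmetic behind the Warthog quip, taking the spokesman’s $6.3 million per kilogram at face value. The 0.43 kg of DU per round is simply “just under a pound” converted to kilograms, a rough figure rather than an official specification:

```python
# Sanity-check the quoted black-market price: if all uranium were really
# worth $6.3M/kg, a single minute of GAU-8 fire would be absurdly expensive.
PRICE_PER_KG = 6.3e6    # claimed black-market price, USD per kilogram
DU_PER_ROUND_KG = 0.43  # "just under a pound" of DU per round (rough figure)
ROUNDS_PER_MIN = 4200   # firing rate quoted above

du_per_minute_kg = DU_PER_ROUND_KG * ROUNDS_PER_MIN
cost_per_minute = du_per_minute_kg * PRICE_PER_KG

print(f"DU expended per minute: {du_per_minute_kg:.0f} kg")
print(f"'Value' at the quoted price: ${cost_per_minute / 1e9:.1f} billion")
```

At those prices a single minute of fire would expend over eleven billion dollars’ worth of “uranium” – which is, of course, the point.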
Not to be outdone, the Telegraph reports that the material seized was “enriched uranium.” Since the caption of the figure that appears in the article notes that the material was U238, commonly referred to as depleted uranium, none of their “fact checkers” apparently has a clue what they’re talking about.
BTW, have you noticed that whenever contraband radioactive and special nuclear material is seized, it’s usually due to good old-fashioned police work, and not to those snazzy new radiation detectors that are being installed hand over fist at ports and border crossings? That’s not a coincidence.