More Thorium Silliness

Thorium is a promising candidate as a future source of energy.  I just wonder what it is about the stuff that inspires so many people to write nonsense about it.  It doesn’t take a Ph.D. in physics to spot the mistakes.  Most of them should be obvious to anyone who’s taken the trouble to read a high school science book.  Another piece of misinformation has just turned up at the website of Popular Mechanics, dubiously titled The Truth about Thorium and Nuclear Power.

The subhead claims that, “Thorium has nearly 200 times the energy content of uranium,” a statement I will assume reflects the ignorance of the writer rather than any outright attempt to deceive. She cites physicist Carlo Rubbia as the source, but if he ever said anything of the sort, he was making some very “special” assumptions about the energy conversion process that she didn’t quite understand. I assume it must have had something to do with his insanely dangerous subcritical reactor scheme, in which case the assumptions needed to get a factor of 200 would have been very “special” indeed. Thorium cannot sustain the nuclear chain reaction needed to produce energy on its own. It must first be transmuted into an isotope of uranium with atomic weight 233 (U233) by absorbing a neutron. Strictly speaking, then, the above statement is nonsense, because the “energy content” of thorium actually comes from a form of uranium, U233, which can sustain a chain reaction on its own. However, let’s be charitable and compare natural thorium and natural uranium as both come out of the ground when mined.

As I’ve already pointed out, thorium cannot be directly used in a nuclear reactor on its own.  Natural uranium actually can.  It consists mostly of an isotope of uranium with an atomic weight of 238 (U238), but also a bit over 0.7% of a lighter isotope with an atomic weight of 235 (U235).  U238, like thorium, is unable to support a nuclear chain reaction on its own, but U235, like U233, can.  Technically speaking, what that means is that, when the nucleus of an atom of U233 or U235 absorbs a neutron, enough energy is released to cause the nucleus to split, or fission.  When U238 or natural thorium (Th232) absorbs a neutron, energy is also released, but not enough to cause fission.  Instead, they become U239 and Th233, which eventually decay to produce plutonium 239 (Pu239) and U233 respectively.

Let’s try to compare apples and apples, and assume that enough neutrons are around to convert all the Th232 to U233, and all the U238 to Pu239.  In that case we are left with a lump of pure U233 derived from the natural thorium and a mixture of about 99.3% Pu239 and 0.7% U235 from the natural uranium.  In the first case, the fission of each atom of U233 will release, on average, 200.1 million electron volts (MeV) of energy that can potentially be converted to heat in a nuclear reactor.  In the second, each atom of U235 will release, on average, 202.5 MeV, and each atom of Pu239 211.5 MeV of energy.  In other words, the potential energy release from natural thorium is actually about equal to that of natural uranium.
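For anyone who wants to check the arithmetic, a few lines of Python will do it. The sketch below uses only the per-fission energies quoted above and assumes, as in the comparison, that both feedstocks are fully converted and burned:

```python
# Compare the heat content of fully converted natural thorium and natural
# uranium, using the per-fission energies quoted in the text (MeV).
N_A = 6.022e23          # Avogadro's number, atoms per mole
MEV_TO_J = 1.602e-13    # joules per MeV

def energy_per_kg(mev_per_fission, atomic_mass):
    """Heat released per kilogram of fuel if every atom eventually fissions."""
    atoms_per_kg = N_A * 1000.0 / atomic_mass
    return mev_per_fission * MEV_TO_J * atoms_per_kg  # joules per kg

# Natural thorium: all Th232 bred to U233 (200.1 MeV per fission).
th = energy_per_kg(200.1, 232)

# Natural uranium: 99.3% of atoms bred to Pu239, 0.7% left as U235.
u = energy_per_kg(0.993 * 211.5 + 0.007 * 202.5, 238)

print(f"thorium: {th:.2e} J/kg, uranium: {u:.2e} J/kg, ratio {u/th:.2f}")
```

Both work out to roughly 8 × 10¹³ joules per kilogram, with uranium ahead by about three percent. Nowhere near a factor of 200 in either direction.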

Unfortunately, the “factor of 200” isn’t the only glaring mistake in the article.  The author repeats the familiar yarn about how uranium was chosen over thorium for power production because it produced plutonium needed for nuclear weapons as a byproduct.  In fact, uranium would have been the obvious choice even if weapons production had not been a factor.  As pointed out earlier, natural uranium can sustain a chain reaction in a reactor on its own, and thorium can’t.  Natural uranium can be enriched in U235 to make more efficient and smaller reactors.  Thorium can’t be “enriched” in that way at all.  Thorium breeders produce U232, a highly radioactive and dangerous isotope, which can’t be conveniently separated from U233, complicating the thorium fuel cycle.  Finally, the plutonium that comes out of nuclear reactors designed for power production, known as “reactor grade” plutonium, contains significant quantities of heavier isotopes of plutonium in addition to Pu239, making it unsuitable for weapons production.

Apparently the author gleaned some further disinformation from Seth Grae, CEO of Lightbridge, a Virginia-based company promoting thorium power.  He supposedly told her that U233 produced in thorium breeders “fissions almost instantaneously.”  In fact, the probability that it will fission is entirely comparable to that of U235 or Pu239, and it will not fission any more “instantaneously” than other isotopes.  Why Grae felt compelled to feed her this fable is beyond me, as “instantaneous” fission isn’t necessary to prevent diversion of U233 as a weapons material.  Unlike plutonium, it can be “denatured” by mixing it with U238, from which it cannot be chemically separated.

It’s a mystery to me why so much nonsense is persistently associated with discussions of thorium, a potential source of energy that has a lot going for it.  It has several very significant advantages over the alternative uranium/plutonium breeder technology, such as not producing significant quantities of plutonium and other heavy actinides, less danger that materials produced in the fuel cycle will be diverted for weapons purposes if the technology is done right, and the ability to operate in a more easily controlled “thermal” neutron environment.  I can only suggest that people who write popular science articles about nuclear energy take the time to educate themselves about the subject.  Tried and true old textbooks like Introduction to Nuclear Engineering and Introduction to Nuclear Reactor Theory by John Lamarsh have been around for years, don’t require an advanced math background, and should be readable by any intelligent person with a high school education.

“Heatballs”: German Technology Triumphs Again

According to Reuters (hattip Tim Blair), German scientists have discovered a new home heating technology that leverages the tendency of charged particles (in this case electrons) to transfer energy to a metal lattice when under the influence of an electromotive force. Although remarkably similar to old-fashioned incandescent bulbs, which were recently banned in the European Union, the devices can be easily distinguished therefrom by virtue of the fact that they are clearly marked “Heatball.”

According to the website set up to market the new devices, they are the,

Best invention since the lightbulb! …A heatball is an electrical resistance intended for heating. Heatball is action art! Heatball is resistance against regulations that are imposed without recourse to any democratic or parliamentary procedure, disenfranchising citizens.

Noting that a portion of the purchase price of each of the devices will be contributed to a fund to save the rainforests, the blurb continues,

Heatball is also a form of resistance against the senseless nature of measures to protect the environment. How is it possible to seriously believe that we can save the world’s climate by using energy efficient lightbulbs, while at the same time condoning the fact that the rainforests have been waiting in vain for their salvation for decades?

Making light of the absurd notion that the devices could be misused to produce light, the site adds,

In accordance with the instructions, the correct use of heatballs is to produce warmth. Would you use a toaster as a reading lamp? …The emission of light during the heating process is a result of the production technology. It is no reason for alarm, nor does it constitute legitimate grounds for a refund.

In the 20th century we found ways to beat Prohibition in the USA.  May our German friends have similar success with their Heatballs in the 21st.

Energy Update: Nuclear Falters, Coal Advances

Something over a year ago, the US government announced that four companies out of 17 that had applied for over a hundred billion dollars worth of federal loan guarantees for 21 proposed nuclear reactors had made what the Wall Street Journal called its “short list.”  At the time, Carl from Chicago, who occasionally writes for ChicagoBoyz, penned an article expressing his “confusion” at the choices.  Several seemingly logical candidates had been passed over, and, of the four picked, three were underfunded and had an assortment of legal and financial issues that made them dubious choices for coming up with the kind of capital needed to fund new construction.  As it turns out, the feds should have listened to Carl.  NRG, one of the two companies he picked as “least likely to succeed,” effectively dropped out of the game some time ago.  Now, as he puts it, “the other shoe has dropped.”  The other weak sister, Constellation Energy Group, just announced it is pulling out of negotiations to build the Calvert Cliffs 3 reactor in Maryland.

Rod Adams at Atomic Insights also commented on Constellation’s decision to walk, citing a related article in the Washington Post according to which,

Separately, administration officials said they had approved a $1.06 billion loan guarantee for an Oregon wind farm, the world’s largest, after project developers waged a vigorous lobbying campaign to bring the year-long application process to a conclusion.

Rod notes the gross disparity in the terms and conditions of loans offered to the two industries:

Just in case anyone wonders why the wind farm project accepted its loan guarantee while Constellation refused, the key is in understanding the terms and conditions.

For a project that would have produced 4,000 jobs for 4-5 years in Maryland, the companies involved were being told that they had to PAY the US government a non refundable fee of $880 MILLION dollars in order to BORROW $7.5 billion for a project where they would have to invest at least 20% of the project cost as their own equity, thus giving them at least $2.0 billion in reasons to make sure the project succeeded.

In contrast, the wind farm, which will produce 400 jobs for a relatively short period during construction, was able to obtain a $1.06 billion dollar loan with NO CREDIT SUBSIDY COST at all. The ARRA has provided all of the money required for the credit subsidy cost for politically defined “renewable” energy via a change in section 1705 of the Energy Policy Act. In addition, section 1603 of the ARRA provides a CASH GRANT in lieu of a production tax credit of 30% of the cost of the project via a check within 6 months after the project closes. The wind project thus gets a $1.06 billion loan with no closing cost and the sponsors have no equity in the project at all since they get their 20% down payment back with a 50% kicker less than a year after the project starts.
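The disparity is stark enough to tabulate. The back-of-envelope figures below come straight from the quoted passage; the wind project’s total size is inferred from the stated 20% equity requirement, so treat it as an estimate:

```python
# Compare the two loan guarantee deals as described above (figures in $B).

# Calvert Cliffs 3 (nuclear): $7.5B loan, $0.88B non-refundable up-front fee.
nuke_loan, nuke_fee = 7.5, 0.88
nuke_fee_pct = 100 * nuke_fee / nuke_loan

# Oregon wind farm: $1.06B loan, no fee, plus a 30% cash grant on project cost.
wind_loan = 1.06
wind_project = wind_loan / 0.80      # assume the loan covers the other 80%
wind_equity = 0.20 * wind_project    # sponsors' 20% down payment
wind_grant = 0.30 * wind_project     # cash grant in lieu of production tax credit

print(f"nuclear up-front fee: {nuke_fee_pct:.1f}% of the loan amount")
print(f"wind cash grant repays sponsors' equity {wind_grant / wind_equity:.1f}x over")
```

The grant comes to 1.5 times the down payment, which is exactly the “50% kicker” Rod describes; the nuclear applicants, by contrast, were asked to hand over nearly 12% of the loan amount up front for the privilege of borrowing.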

In short, hype about a “nuclear renaissance” should be taken with a grain of salt, at least until the government gets its act together.  Meanwhile, the coal industry has reason to cheer.  New coal gasification plants are being built in the US even as we speak.  Among other things, they produce hydrogen, a long shot candidate as a non-polluting vehicle fuel to replace petroleum.  Ideas for getting the stuff out of coal without releasing tons of CO2 in the process remain sketchy.  Even more intriguingly, a firm is seriously looking into the possibility of building a coal liquefaction plant in Indiana.  Whether they decide the new plant is financially feasible or not, the fact that such a project has made it this far along in the planning process demonstrates how close coal has come to becoming a viable replacement for petroleum.  Given that the United States has over a quarter of the proved coal reserves in the entire world, and that those reserves are more than twice the size in terms of energy as the world’s remaining oil, that is a fact of no small significance.

Mineral Wealth in Afghanistan: The Saudi Arabia of Lithium?

The Grey Lady seemed positively ecstatic about recent discoveries of mineral wealth in Afghanistan in an article that appeared yesterday. The finds include iron, copper, gold, and a host of other valuable minerals, worth a cool $1 trillion in all. The most significant of them all may turn out to be lithium. Initial analysis indicates deposits at only one location as large as those of Bolivia, the country that now has the world’s largest known reserves.

Lithium has become increasingly important lately as a component of small but powerful batteries. It will become a lot more important if fusion energy ever becomes a reality. I don’t expect this to happen anytime soon. Even if the remaining scientific hurdles can be overcome, the engineering difficulties of maintaining the extreme conditions necessary for fusion reliably over the long periods necessary to extract useful electric power would be daunting. Fusion power would likely be too expensive to compete with alternative energy sources under the best of circumstances. However, that’s my opinion, and a good number of very intelligent scientists disagree with me. If they’re right, and the upcoming proof of principle experiments at the National Ignition Facility prove far more successful than I expect, or some scientific breakthrough enables us to tame fusion on much smaller and less costly machines, fusion power may yet become a reality.

In that case, lithium may play a far more substantial role in energy production than it ever could as a component of batteries. It could literally become the metallic “oil” of the future. The reason for that is the fact that the easiest fusion reaction to tame is that between two heavy isotopes of hydrogen; namely, deuterium and tritium.  The “cross section” for the fusion reaction between these two isotopes, meaning the probability that it will occur under given conditions, becomes significant at substantially lower temperatures and pressures than competing candidates.  The fly in the ointment is the availability of fuel material.  Deuterium is abundant in nature.  Tritium, however, is not.  It must be produced artificially.  The raw material is lithium.

It happens that the fusion reaction between deuterium and tritium results in the production of a helium nucleus and a very energetic neutron.  This neutron can cause reactions in either of the two most common naturally occurring isotopes of lithium, Li-6 and Li-7, that produce tritium.  Thus, the fusion reactions that may one day produce energy for electric power could also be leveraged to breed tritium if the reaction were made to take place in the vicinity of lithium, either in a surrounding blanket or one of several other more fanciful proposed arrangements.   
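For the record, the bookkeeping looks like this. The Q-values below are standard textbook figures, not taken from the article:

```python
# The D-T fusion reaction and the two lithium breeding reactions, with
# approximate energy release (Q-value) in MeV. Standard reference figures.
reactions = {
    "D + T -> He4 + n":         17.6,   # the energy-producing reaction
    "Li6 + n -> T + He4":        4.8,   # exothermic; works with slow neutrons
    "Li7 + n -> T + He4 + n'":  -2.5,   # endothermic; needs the fast 14 MeV neutron
}

# Each D-T fusion yields exactly one fast neutron, and each neutron absorbed
# in Li6 yields exactly one tritium, so in principle the fuel cycle can close.
# The Li7 reaction even regenerates its neutron, which helps push the
# breeding ratio above one.
for eq, q in reactions.items():
    kind = "releases" if q > 0 else "absorbs"
    print(f"{eq:26s} {kind} {abs(q):.1f} MeV")
```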

As noted above, I don’t think that day is coming anytime soon.  If and when it does, Afghanistan may well become the Saudi Arabia of a new technological era of energy production.

Whither Nuclear Power?

Carl at Chicago Boyz has some interesting insights on the future prospects for nuclear power.  According to his latest,

While there has been talk of a nuclear “renaissance” in the media for years, it is mostly hype. Existing nuclear plants in the US are running at a high capacity factor and making money for their owners, but there has been little tangible investment in new nuclear plants in the US.

One giant barrier to building new nuclear plants in the US is financing. We haven’t built a new nuclear plant in the US in decades so no one really knows what it will cost (and it depends on which design is chosen) but it is safe to assume that they will cost more than $8-10B each. Given that the entire market capitalization of most US electric utilities is smaller than this figure, as I discussed in this post in June of 2009, the idea that new nuclear plants would be built in large numbers was a pipe dream.

Read the whole article and some of the outstanding comments as well.  For example, one of the nuclear engineers working on the new starts in Texas writes,

First, let’s understand the nature of the loan guarantees. I’m a nuclear engineer who has been involved with the South Texas Project’s new reactor plans since near the beginning.

The loan guarantees do not guarantee against technical risk. They only cover subsequent GOVERNMENT actions. In the last batch, investors lost billions due to capricious government actions either to delay or prevent startup. Once the NRC issues a “combined operating license” (COL) per 10CFR52, the guarantee is to kick in so that no county government or state agency (or feds) can block construction and completion. When a number is given on the amount of loan guarantees, that is NOT the money that has to be spent. It is merely the exposure of default. Each applicant for a guarantee has to pay an upfront fee like an insurance premium to the government based on the expected risk of default. Basically, the federal government is acting as an insurance company, collecting premiums and covering specific risks.

THAT’S ALL WE NEED! Get government and politics out of the way and we can build and run new nuclear power plants in the country.

As you will see if you read the article, Carl is extremely pessimistic about the possibility of a nuclear “renaissance.”  Unless we can find a rational way to deal with lawyers, NIMBYs, and multiple layers of redundant government regulation, he’s probably right.  He summarizes the country’s energy picture as follows:

– new drilling technologies are making natural gas in the US cheaper, which makes other types of investment (nuclear, coal) less financially feasible
– while many companies were potential investors in new nuclear plants, only one (Southern Company) was really feasible, and they seem to be first out of the gate (woe to their shareholders, however)
– NRG jumped out first with their Texas plant but it is looking like they are going to pull the plug on that under-capitalized effort
– the Federal government is continuing to be completely inept in their activities 1) unable to disburse stimulus funds, as predicted 2) no plan for waste after abandoning Yucca Mountain 3) can’t figure out what to do about “clean coal” projects after spending over $1B in Illinois and 7 years to boot
– not covered here is cap and trade, which needs its own post to do it justice. It looks like the recent change in the senate will stop this in its tracks, but legal efforts to stop the EPA from implementing new draconian rules continues

As Carl says, the key problem when it comes to nuclear startups is the “giant barrier” of cost.  It will be interesting to see how this plays out, but a suggestion by one of the other commenters seems to make sense:

One way of solving the problem quickly is to use smaller units manufactured offsite. Babcock & Wilcox, for example, proposes self-contained reactors producing 100-250 MWe. The site would be prepared; the reactor could then be manufactured in a factory and brought in by train or barge. Once at the site the reactor could be hooked up to the system and started up quickly.

There’s an excellent article on small nuclear reactors at the World Nuclear Association website.  Carl plans to take a closer look at the cost issue in a later post, but, if new conventional plants really cost “more than $8 to $10 billion each,” small reactors look very competitive.  After all, a complete Virginia class nuclear submarine only costs $1.8B.  Why not just build a whole fleet of dummy nuclear submarines, float them out beyond the territorial limit, and hook them up to the grid with extension cords?  It would knock out the lawyers and the NIMBYs at one blow!

Crunch Time for the National Ignition Facility

The news from California is encouraging.  In an article recently published in Science and summarized on the website of Lawrence Livermore National Laboratory (LLNL), scientists working at the National Ignition Facility (NIF) report efficient coupling of energy from all 192 beams of the giant facility into a hohlraum target similar to the one that will be used later this year in the first attempts to achieve fusion ignition and “breakeven,” usually defined as more energy production from fusion than was carried in the laser beams used to hit the target.  The design energy of the NIF is 1.8 megajoules, and, according to the latest reports from Livermore, the threshold of one megajoule has already been achieved.

In inertial confinement fusion, or ICF, the target, a thin, spherical shell containing a mixture of deuterium and tritium, two heavy isotopes of hydrogen, is first compressed and imploded to very high densities.  A series of converging shocks then create a “hot spot” in the center of the compressed material, setting off fusion reactions which release enough energy to set off a “burn wave.”  This wave propagates out through the remaining fuel material, heating it to fusion energies as well.  The process is known as inertial confinement fusion because it takes place so fast (on the order of a nanosecond) that the material’s own inertia holds it in place long enough for the fusion reactions to occur.  There are two basic approaches; direct drive, in which the laser beams hit the fusion target directly, and indirect drive, the process that will be used in the upcoming Livermore ignition experiments, in which the beams are shot into a hollow can or “hohlraum,” producing x-rays when they hit the inner walls.  These x-rays then implode and ignite the target.  
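A rough feel for why such enormous compression is needed comes from a standard ICF rule of thumb (my addition, not anything from the Livermore paper): the fraction of the D-T fuel that burns before the pellet disassembles scales with the areal density ρR of the compressed fuel, roughly as f ≈ ρR/(ρR + 6) with ρR in g/cm²:

```python
# Approximate D-T burn-up fraction as a function of fuel areal density,
# using the common rule of thumb f = rho*R / (rho*R + 6), rho*R in g/cm^2.
def burn_fraction(rho_r):
    """Rough fraction of D-T fuel burned before the pellet flies apart."""
    return rho_r / (rho_r + 6.0)

for rho_r in (0.3, 1.0, 3.0):
    print(f"rho*R = {rho_r:3.1f} g/cm^2  ->  ~{100 * burn_fraction(rho_r):.0f}% burned")
```

Uncompressed D-T is far too tenuous to reach useful areal densities in a millimeter-scale capsule, which is what the convergent implosion is for.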

A potential problem that must be overcome in ICF is known as laser plasma interactions (LPI).  These are parasitic interactions which can soak up laser energy and quench the fusion process.  According to the Livermore paper, special grids at the hohlraum entrance holes were used in the latest experiments, allowing the use of LPI to “tweak” the incoming beams, steering them to just the right spots.  This recent (and elegant) innovation allows the exploitation of a process that has always been considered a major headache in the past to actually improve the chances of achieving ignition.

The BBC and Spiegel both have articles about the latest experiments today, conflating the energy and military applications of the NIF as usual.  According to the Spiegel article, for example, it will be necessary for the lasers in a fusion reactor to hit the target ten times a second, whereas hours are necessary between shots at the NIF.  The reason, of course, is that the NIF was never designed as an energy project, but is being funded by the National Nuclear Security Administration (NNSA) to conduct nuclear weapons experiments.  If ignition is achieved, the prospects for fusion energy will certainly be improved, but the prospects aren’t nearly as bright as the press releases from LLNL would imply.  It will still be necessary to overcome a great number of scientific and engineering hurdles before the process can ever become useful and economical as a source of energy.

I am not optimistic about the success of the upcoming experiments.  I suspect it will be too difficult to achieve the fine beam energy balance and symmetry that will be necessary to ignite the central “hot spot.”  It will take more than one converging shock to do the job.  Several will be necessary, moving inward through the target material at just the right speed to converge at a small spot at the center.  If they really pull it off, I will be surprised, but will be more than happy to eat crow.  A lot of very talented scientists have dedicated their careers to the quest for fusion, and I’m keeping my fingers crossed for them. 

Even if these ignition experiments fail, it won’t mean the end for fusion by a long shot.  We know we can achieve the high fuel densities needed for inertial fusion, and there are other ways of creating the “hot spot” needed to achieve ignition, such as “fast ignitor.”  Other approaches to fusion keep showing up in the scientific literature, and I can’t help but think that, eventually, one of them will succeed.

Biocentrism and Other Quantum Mechanical Artifacts

Given the massive scientific, technological and philosophical significance of the great discoveries in the field of quantum mechanics since Max Planck saved us from the Ultraviolet Catastrophe, it’s odd how little of that knowledge has percolated down through even the more educated and well-informed strata of society. Occasionally you might run across someone who’s heard about the quantized energies, quantum states, and quantum numbers that Planck postulated more than a century ago. However, the stunning theories about the wave nature of matter developed by the likes of de Broglie, Schrödinger, Pauli, Heisenberg, and many of the other giants of 20th century physics are usually terra incognita for anyone other than physical scientists. It’s a shame, because the implications of what they revealed to us are profound. Among other things, the purely deterministic universe of classical physics is no more. It is no longer quite so “obvious” that, as so eloquently put by Edward Fitzgerald in his translation of the Rubaiyat,

With Earth’s first Clay They did the Last Man knead,
And there of the Last Harvest sow’d the Seed:
And the first Morning of Creation wrote
What the Last Dawn of Reckoning shall read.

We have discovered that the reality of the universe does not exactly correspond to the picture our senses present to us, and we are still far from knowing what all this stuff around us really is, and why it exists to begin with. It is a strange reality of fields, wave functions and space and time whose measurements depend on who is doing the measuring. It’s too bad most of us are so unaware of all these developments. There are many good books out there, including some that should be easily comprehensible to an intelligent undergraduate or even a high school student, that could clear up a lot of the mystery.  It would be well if our schools devoted more time to teaching some of this material.

Meanwhile, all sorts of fanciful notions are floating about to charm the unwary and impose on the gullible.  Among these is the idea of biocentrism, according to which the universe has no independent existence, but is created by life, or, more specifically, consciousness, and could not exist without it.  The modern incarnation of this Berkelian universe was recently set forth by Robert Lanza and Bob Berman in a book entitled, “Biocentrism:  How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe.”

A review of the book appears on the website of Discover Magazine with the subhead, “Stem-cell guru Robert Lanza presents a radical new view of the universe and everything in it.”  Terms like “radical” and “new” are a bit of a stretch.  Berkelian ideas supposedly informed by quantum discoveries have been around since at least the days when Schrödinger came up with his famous parable of the cat.  We can forgive the authors for a bit of hype though, as it is unlikely that something more realistic, like “hackneyed old view,” would have encouraged sales of their book.  In any case, according to Lanza,

For centuries, scientists regarded Berkeley’s argument as a philosophical sideshow and continued to build physical models based on the assumption of a separate universe “out there” into which we have each individually arrived. These models presume the existence of one essential reality that prevails with us or without us. Yet since the 1920s, quantum physics experiments have routinely shown the opposite: Results do depend on whether anyone is observing. This is perhaps most vividly illustrated by the famous two-slit experiment. When someone watches a subatomic particle or a bit of light pass through the slits, the particle behaves like a bullet, passing through one hole or the other. But if no one observes the particle, it exhibits the behavior of a wave that can inhabit all possibilities—including somehow passing through both holes at the same time.

Some of the greatest physicists have described these results as so confounding they are impossible to comprehend fully, beyond the reach of metaphor, visualization, and language itself. But there is another interpretation that makes them sensible. Instead of assuming a reality that predates life and even creates it, we propose a biocentric picture of reality. From this point of view, life—particularly consciousness—creates the universe, and the universe could not exist without us.

Here it is hard to avoid the conclusion that Lanza is deliberately imposing on the reader’s credulity.  The only other conclusion is that he simply doesn’t know what he’s talking about.  The results of the “famous two slit experiment” have been well understood since at least the time that Heisenberg proposed his famous Uncertainty Principle.  It is well known that a measuring device capable of detecting a particle at either of the two slits could not measure its passage without interacting with it, and that if it had sufficient spatial resolution to determine which slit it passed through, it would necessarily disturb the particle’s momentum so much that the double-slit interference pattern would be destroyed.  If any “great physicists” are still “confounded” by these results, I would like to know who they are.  How a biocentric view of the universe somehow explains this imaginary paradox is beyond me.  Continuing with Lanza:
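The standard Feynman-style version of that argument is easy to make quantitative. In the sketch below the particle wavelength and slit spacing are illustrative values of my own choosing:

```python
import math

# Which-slit detection versus interference, via the uncertainty principle.
h = 6.626e-34            # Planck's constant, J*s
hbar = h / (2 * math.pi)

lam = 1e-10              # particle's de Broglie wavelength, m (assumed)
d = 1e-6                 # slit separation, m (assumed)
p = h / lam              # particle momentum, from de Broglie's relation

# Angular spacing of the interference fringes on a distant screen.
fringe_angle = lam / d

# Localizing the particle to one slit (position uncertainty < d) forces a
# transverse momentum uncertainty of at least ~hbar/d, i.e. a random kick:
kick_angle = (hbar / d) / p      # works out to lam / (2*pi*d)

print(f"fringe spacing ~ {fringe_angle:.1e} rad, momentum kick ~ {kick_angle:.1e} rad")
# The kick is the same order as the fringe spacing, so the pattern washes out.
```

Whatever the numbers chosen, the kick is always the same fixed fraction of the fringe spacing, so measuring the slit always smears the pattern. No consciousness required.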

In 1997 University of Geneva physicist Nicolas Gisin sent two entangled photons zooming along optical fibers until they were seven miles apart. One photon then hit a two-way mirror where it had a choice: either bounce off or go through. Detectors recorded what it randomly did. But whatever action it took, its entangled twin always performed the complementary action. The communication between the two happened at least 10,000 times faster than the speed of light. It seems that quantum news travels instantaneously, limited by no external constraints—not even the speed of light. Since then, other researchers have duplicated and refined Gisin’s work. Today no one questions the immediate nature of this connectedness between bits of light or matter, or even entire clusters of atoms.

Before these experiments most physicists believed in an objective, independent universe. They still clung to the assumption that physical states exist in some absolute sense before they are measured.

All of this is now gone for keeps.

In the first place, the belief in an objective, independent universe is not the same thing as the assumption that physical states exist in some absolute sense before they are measured.  In the second place, “all of this” is not gone for keeps in either case.  Such comments have nothing in common with scientific hypotheses.  Rather, they are ideological statements of faith.  Lanza continues with a discussion of the so-called Goldilocks principle:

The strangeness of quantum reality is far from the only argument against the old model of reality. There is also the matter of the fine-tuning of the cosmos. Many fundamental traits, forces, and physical constants—like the charge of the electron or the strength of gravity—make it appear as if everything about the physical state of the universe were tailor-made for life. Some researchers call this revelation the Goldilocks principle, because the cosmos is not “too this” or “too that” but rather “just right for life.”

At the moment there are only four explanations for this mystery. The first two give us little to work with from a scientific perspective. One is simply to argue for incredible coincidence. Another is to say, “God did it,” which explains nothing even if it is true.

The third explanation invokes a concept called the anthropic principle, first articulated by Cambridge astrophysicist Brandon Carter in 1973. This principle holds that we must find the right conditions for life in our universe, because if such life did not exist, we would not be here to find those conditions. Some cosmologists have tried to wed the anthropic principle with the recent theories that suggest our universe is just one of a vast multitude of universes, each with its own physical laws. Through sheer numbers, then, it would not be surprising that one of these universes would have the right qualities for life. But so far there is no direct evidence whatsoever for other universes.

The final option is biocentrism, which holds that the universe is created by life and not the other way around.

Why biocentrism, which explains none of the observed phenomena mentioned in the article, must be considered the “final option” is beyond me.  Allow me to suggest a fifth option:  Our knowledge of the physical universe is imperfect, and, as yet, we lack the physical insight to explain everything we observe or to grasp the physical essence of a universe of which our senses give us but a clouded perception.  While I am not quite as convinced as Einstein that “God does not play dice with the universe,” it seems to me that the words of de Broglie, a great physicist who first proposed the theory of matter waves, are well worth heeding:

We can reasonably accept that the attitude adopted for nearly 30 years by theoretical quantum physicists is, at least in appearance, the exact counterpart of information which experiment has given us of the atomic world. At the level now reached by research in microphysics, it is certain that methods of measurement do not allow us to determine simultaneously all the magnitudes which would be necessary to obtain a picture of the classical type of corpuscles (this can be deduced from Heisenberg’s uncertainty principle), and that the perturbations introduced by the measurement, which are impossible to eliminate, prevent us in general from predicting precisely the result which it will produce and allow only statistical predictions. The construction of purely probabilistic formulae that all theoreticians use today was thus completely justified. However, the majority of them, often under the influence of preconceived ideas derived from positivist doctrine, have thought that they could go further and assert that the uncertain and incomplete character of the knowledge that experiment at its present stage gives us about what really happens in microphysics is the result of a real indeterminacy of the physical states and of their evolution. Such an extrapolation does not appear in any way to be justified. It is possible that looking into the future to a deeper level of physical reality we will be able to interpret the laws of probability and quantum physics as being the statistical results of the development of completely determined values of variables which are at present hidden from us. It may be that the powerful means we are beginning to use to break up the structure of the nucleus and to make new particles appear will give us one day a direct knowledge which we do not now have at this deeper level. 
To try to stop all attempts to pass beyond the present viewpoint of quantum physics could be very dangerous for the progress of science and would furthermore be contrary to the lessons we may learn from the history of science. This teaches us, in effect, that the actual state of our knowledge is always provisional and that there must be, beyond what is actually known, immense new regions to discover.

Well said by a great physicist and a great thinker, who, in spite of his fame, still had the humility to present his ideas as hypotheses instead of dogmas set forth imperiously as “the final option.”

Climategate and Scientific Credibility

I think this article by Cathy Young about the global warming debate is spot on (hat tip Instapundit). Her conclusions:

There is no doubt that refusal to accept human-made climate change is often self-serving. But the other side has blinders and selfish motives of its own. “Going green” has turned into a vast industry in its own right—as well as a religion with its own brand of zealotry. For many, global warming is the secular equivalent of a biblical disaster sent by God to punish humankind for its errant (capitalist) ways. Those who embrace environmentalism as a faith have no interest in scientific and technological solutions to climate change—such as nuclear power—that do not include imposing drastic regulations on markets and curbs on consumption.

In theory, science should be above such motives. Yet, at the very least, the scientists who back strong measures against global warming have not objected to the alarmism, the political fanaticism, or the pseudo-spiritual drivel promoted by many of the crusaders in this cause.

Public trust is something scientists must work hard to maintain. When it comes to science and public policy, the average citizen usually has to trust scientists—whose word he or she has to take on faith almost as much as a religious believer takes the word of a priest. Once that trust is undermined, as it has been in recent years, science becomes a casualty of politics.

It was obvious to me that environmental scientists had a major credibility problem when I read Bjorn Lomborg’s “The Skeptical Environmentalist.” This impression was greatly strengthened when a gang of scientific hacks set up a kangaroo court known as the Danish Committees on Scientific Dishonesty and “convicted” Lomborg of “scientific dishonesty,” noting, however, with supreme condescension, that Lomborg was “not guilty” because of his “lack of expertise” in the fields in question. How this arrogant scientific pond scum could have come to such a conclusion when they were unable to cite a single substantial example of factual error in Lomborg’s book is beyond me. Their abject betrayal of science spoke for itself. Needless to say, the credibility of environmental scientists has not improved in the interim, as Young notes in her article.

This is unfortunate, as it seems to me that the evidence is strong that we may be facing a serious problem with artificially induced global warming. However, because, as Young points out, “…the scientists who back strong measures against global warming have not objected to the alarmism, the political fanaticism, or the pseudo-spiritual drivel promoted by many of the crusaders in this cause,” the issue has become politicized to such an extent that the chances that we will be able to do anything more effective than ideological grandstanding to address the problem are almost nil. As usual, the politicians, who rejoice whenever a crisis comes along for them to “save” us from, will promote any number of very expensive but useless nostrums that present us with the pleasant illusion that we are doing something about the problem. Such measures may reduce greenhouse emissions by some insignificant fraction, but will accomplish nothing in the way of really solving it. In the meantime, the rest of us must keep our fingers crossed that some fortuitous technological advance will allow us to dodge the bullet, perhaps in the form of the discovery of a way to tame fusion or a transformational improvement in the efficiency of solar collectors. For those of us who possess the means, it is, perhaps, not too soon to begin looking for attractive tracts of land in Alaska, preferably on high ground.

Germany to Reverse Course on Atomic Energy?

As a result of their dismal showing in the elections to the Bundestag on September 27, Germany’s left-of-center Social Democrats (SPD) have been replaced as the coalition partner of the more conservative Christian Democrats (CDU) by the market-oriented Free Democratic Party (FDP), ending the former “grand coalition” government. One salutary result has been an apparent reversal of the irrational but ideologically fashionable decision to shut down Germany’s nuclear generating capacity. According to Der Spiegel,

The Union (CDU) and FDP will accommodate the nuclear industry – but under stern conditions. The operational lifetime of German nuclear power plants can be extended on condition that high safety standards are met. According to a paper by the new coalition’s working group on the environment made available to Spiegel Online, “Nuclear energy will be necessary as a transitional and bridge technology until climate friendly and more economical alternative means of producing sufficient electricity are available capable of meeting baseload electric generation requirements. Therefore, the operational lifetime of German nuclear power plants will be extended to 32 years.”

Human Enhancement and Morality: Another Day in the Asylum

The Next Big Future site links to a report released by a bevy of professors that, we are told, is to serve “…as a convenient and accessible starting point for both public and classroom discussions, such as in bioethics seminars.” The report itself may be found here. It contains “25 Questions & Answers,” many of which concern moral and ethical issues related to human enhancement. For example,

1. What is human enhancement?
2. Is the natural/artificial distinction morally significant in this debate?
3. Is the internal/external distinction morally significant in this debate?
4. Is the therapy/enhancement distinction morally significant in this debate?
9. Could we justify human enhancement technologies by appealing to our right to be free?
10. Could we justify enhancing humans if it harms no one other than perhaps the individual?

You get the idea. Now, search through the report and try to find a few clues about what the authors are talking about when they use the term “morality.” There are precious few. Under question 25 (Will we need to rethink ethics itself?) we read,

To a large extent, our ethics depends on the kinds of creatures that we are. Philosophers traditionally have based ethical theories on assumptions about human nature. With enhancements we may become relevantly different creatures and therefore need to re-think our basic ethical positions.

This is certainly sufficiently coy. There is no mention of the basis we are supposed to use to do the re-thinking. If we look through some of the other articles and reports published by the authors, we find other hints. For example, in “Why We Need Better Ethics for Emerging Technologies” in “Ethics and Information Technology” by Prof. James H. Moor of Dartmouth we find,

… first, we need realistically to take into account that ethics is an ongoing and dynamic enterprise. Second, we can improve ethics by establishing better collaborations among ethicists, scientists, social scientists, and technologists. We need a multi-disciplinary approach (Brey, 2000). The third improvement for ethics would be to develop more sophisticated ethical analyses. Ethical theories themselves are often simplistic and do not give much guidance to particular situations. Often the alternative is to do technological assessment in terms of cost/benefit analysis. This approach too easily invites evaluation in terms of money while ignoring or discounting moral values which are difficult to represent or translate into monetary terms. At the very least, we need to be more proactive and less reactive in doing ethics.

Great! I’m all for proactivity. But if we “do” ethics, what is to be the basis on which we “do” them? If we are to have such a basis, do we not first need to understand the morality on which ethical rules are based? What we have here is another effort by “experts on ethics” who apparently have no clue about the morality that must be the basis for the ethical rules they discuss so wisely if they are to have any legitimacy. If they do have a clue, they are being extremely careful to make sure we are not aware of it. Apparently we are to trust them because, after all, they are recognized “experts.” They don’t want us to peek at the “man behind the curtain.”

This is an excellent example of what E. O. Wilson was referring to when he inveighed against the failure of these “experts” to “put their cards on the table” in his book, “Consilience.” The authors never inform us whether they believe the morality they refer to with such gravity is an object, a thing-in-itself, or, on the contrary, is an evolved, subjective construct, as their vague allusion to a basis in “human nature” would seem to imply. Like so many other similar “experts” in morality and ethics, they are confident that most people will “know what they mean” when they refer to these things and will not press them to explain themselves. After all, they are “experts.” They have the professorial titles and NSF grants to prove it. When it comes to actually explaining what they mean when they refer to morality, to informing us what they think it actually is, and how and why it exists, they become as vague as the Oracle of Delphi.

Read John Stuart Mill’s “Utilitarianism,” and you will quickly see the difference between the poseurs and someone who knows what he’s talking about. Mill was not able to sit on the shoulders of giants like Darwin and the moral theorists who based their ideas on his work, not to mention our modern neuroscientists. Yet, in spite of the fact that these transformational insights came too late to inform his work, he had a clear and focused grasp of his subject. He knew that it was not enough to simply assume others knew what he meant when he spoke of morality. In reading his short essay we learn that he knew the difference between transcendental and subjective morality, that he was aware of and had thought deeply about the theories of those who claimed (long before Darwin) that morality was a manifestation of human nature, and that one could not claim the validity or legitimacy of moral rules without establishing the basis for that legitimacy. In other words, Mill did lay his cards on the table in “Utilitarianism.” Somehow, the essay seems strangely apologetic. Often it seems he is saying, “Well, I know my logic is a bit weak here, but I have done at least as well as the others.” Genius that he was, Mill knew that there was an essential something missing from his moral theories. If he had lived a few decades later, I am confident he would have found it.

Those who would be taken seriously when they discuss morality must first make it quite clear they know what morality is. As those who have read my posts on the topic know, I, too, have laid my cards on the table. I consider morality an evolved human trait, with no absolute legitimacy whatsoever beyond that implied by its evolutionary origin at a time long before the emergence of modern human societies, or any notion of transhumanism or human enhancements. As such, it can have no relevance or connection whatsoever to such topics other than as an emotional response to an issue to which that emotion, an evolved response like all our other emotions, was never “designed” to apply.