Posted on December 1st, 2013
Morality exists because of evolved behavioral traits. They are its ultimate cause. Without them, morality as we know it, in all of its various complex manifestations, would cease to exist. Without them, the subjective perception in the brains of individuals of such things as good, evil, and rights would disappear as well. We perceive all of these as objects, as independent things-in-themselves, because individuals who perceived them in that way were more likely to survive and reproduce. However, they do not exist as things-in-themselves, a fact that has led to endless confusion in creatures such as ourselves, who are capable of reasoning about these nonexistent objects that seem so real.
It follows that, in spite of the legions of philosophers over the centuries who have presumed to enlighten us about the objective “should,” such an entity is as imaginary as unicorns. There is no objective reason why individuals “should” do anything in order to embrace good and reject evil, because good and evil are not objects. The same applies to the State. From a moral point of view (and it can be assumed in what follows that I am speaking of that point of view when I use the term “should”), there is no objective reason why the State should act one way in order to be good, or should not act another way in order to avoid evil. When an individual says that the state should do one thing, and not another, (s)he is simply expressing a personal desire. That, of course, applies to my own point of view. When I speak of what the State should or should not do, I am merely expressing a personal opinion, based on my own conjecture about the kind of state I would like to live in.
In the first place, we can say that there is no essential connection between the modern State and morality, because no such entity as the modern State existed during the time over which the behavioral traits we associate with morality evolved. However, a State that does not take morality into account is unlikely to be effective at achieving the goals its citizens have set for it, because it is the nature of those citizens to be influenced by moral predispositions. If a sufficient number of them perceive that the State is acting immorally, or violating what seem to them to be their rights, they may resist its laws, or rebel.
If the State is to act “morally,” does it follow that there should be an establishment of religion, whether of the spiritual or the secular variety? Based on the empirical evidence of our history, and what I know of human behavior, it seems to me that it does not. The value to the state of an established moral system lies in its potential for welding all its citizens into a single ingroup. It seems plausible that a single ingroup would be more effective at achieving the common goals of a State’s citizens than a collection of distinct ingroups, each of which might perceive one or more of the others as outgroups. In such cases the expression of hatred and hostility towards the outgroup(s) would likely be disruptive.
Unfortunately, established moral systems throughout history have all tended to be unstable and counterproductive. From the time Christianity became the state religion of the Roman Empire until the final fall of its Byzantine remnant, there was constant strife between Trinitarians and Anti-Trinitarians, iconodules and iconoclasts, those who accepted the Three Chapters and those who condemned them, etc. Later attempts to preserve single ingroup orthodoxy spawned the massacre of the Albigensians, the long decades of the Hussite wars, the century of intermittent warfare between the Catholics and the Huguenots in France, and many another bloody chapter in human history. Established religions became instruments of exploitation in the hands of the powerful, resulting in the bloody reprisals of the French Revolution, the Spanish Civil War, etc. A problem with established religions has always been that people cannot change deeply held beliefs at will, and they resent being forced to pretend they believe things when they don’t. Typically, force is necessary to suppress that resentment, as we have seen in modern Iran. The “right” of freedom of religion is basically a recognition of these drawbacks.
The more recent secular religions have fared no better. The two most familiar examples of the 20th century, Communism and Nazism, for example, both found it necessary to brutally suppress any opposition. The “great rewards” of such religions, whether in the form of a utopian classless society or a Teutonic golden age, are worldly rather than in the great beyond, and eventually become noticeable by their absence. All moral systems have outgroups as well as ingroups and, in the case of the secular religions, these also tend to be worldly rather than spiritual. In the case of the Communists and the Nazis, this led to the mass slaughter of the “bourgeoisie” and the Jews, respectively, robbing the State of many citizens, who often happened to be among the most intelligent and productive. It would seem that these two dire examples would be enough in themselves to deter us from any further experiments along similar lines. Remarkably, however, as those who have read the books of the likes of Sam Harris and Joshua Greene are aware, we continue to cobble away on new “scientific” versions, seemingly oblivious to the outcomes of our past attempts.
As an anodyne to all these problems, the philosophers of the Enlightenment sought to limit the power of the State by establishing Rights, such as freedom of religion, freedom of speech, freedom of assembly, etc. While these Rights are not things-in-themselves, they are perceived as such. Though they are merely subjective constructs, they can still acquire legitimacy if they are generally accepted and hallowed by tradition. Democracy was held forth as the proper vehicle for promoting these rights and guarding against the abuse of power by autocratic rulers. As implemented, modern democracies have hardly been perfect, but they have been more stable than autocratic forms of government, and have often, although not invariably, survived such challenges as hard economic times and war. However, their drawbacks are also clearly visible. For example, they have recently proved powerless to resist the massive influx of culturally alien populations that are far more likely to become a source of future civil strife, if not worse, than to be of any long-term benefit to the existing citizens whose welfare these democratic states are supposed to protect. Such populations benefit elites as a source of votes and cheap labor, but are likely to be harmful to society as a whole in the long term. In short, the jury is still out as to whether the post-Enlightenment democracies will eventually be perceived as Good or Evil.
It is not clear what, if any, alternative system might actually be better than democracy. The Chinese oligarchy has certainly had remarkable success in expanding the economic and military power of that country. However, its legitimacy is based on its supposed representation of the bankrupt, foreign ideology of Marxism. Even so, in a traditionalist country like China it may hold onto “the mandate of heaven” for a long time in spite of the glaring contradictions between its professed ideology and its practice.
In general, “virtuous” states – those free of corruption, that do not cheat or steal from their citizens, and that are effective in enforcing laws that are perceived as just – are more effective at promoting the common weal than their opposites. Heraclitus’ dictum that “character is destiny” likely applies to states as well as individuals. I personally think that states are far more likely to be “virtuous” in that sense if their powers are carefully circumscribed and limited. Whenever new moral systems are implemented, “scientific” or otherwise, those limits tend to be dissolved. When it comes to the State, it is probably better to think in terms of “Thou shalt not” than in terms of “Thou shalt.” Two that come to mind are Thou shalt not kill (except, as Voltaire suggested, in large numbers and to the sound of trumpets), and Thou shalt not torture.
Posted on November 25th, 2013
While catching up on my back issues of The Monthly Magazine, I ran across an interesting letter to the editor by one “P. W.” in the April 1801 issue. (It’s on page 220 in case you happen to have a copy lying around the house.) Many women probably had the same idea long before him, but it’s the earliest piece by a man I’ve ever run across proposing equal pay for equal work. Some excerpts:
Sir, the propriety of giving women the same pay as men, for acting with equal success in the same station, has long been so forcibly impressed upon my mind, that I cannot resist my inclination to give you the reasons for the opinion I have formed on the subject…
First, it is obvious that the absurdity of custom can never overthrow or diminish the authority of the immutable law of justice, which directs that women should receive equal rewards with men, for the same services equally performed.
Second, the sound policy of calling out the abilities of every member of the community, for the benefit of the whole, by the stimulus of an adequate reward, is a principle that should be extended to both sexes; a change that would improve the female character, and convert her present insignificancy into usefulness. The stage, the fine arts, and literary composition are the principle departments in which an equality of honour and profits are to be obtained by the competitors of either sex…
Third, humanity unites with policy, in enforcing the advantage of providing resources for women of all ranks, whereby they may gain an honourable support, when deprived of the customary protection of male relatives…
The interests of morality [demand] the abolition of the absurd and unjust depreciation of female talent, as it certainly operates as a check to the exertions of women, and tends to multiply the herd of those unhappy frail-ones, who fall a prey to seduction; and who, in their turn, become seducers, and inveigle our sons, our fathers, and our husbands, into the paths of destruction.
It took a while, but the shade of P. W. must have smiled when President Kennedy signed the Equal Pay Act into law in 1963.
As those who click on the link above will see, The Monthly Magazine was published in London from 1796 to 1843. Among other things, it published the earliest fiction of Charles Dickens.
Posted on November 20th, 2013
According to the ever-vigilant hbd*chick, the Danish kangaroo court for scientists that goes by the moniker of the Danish Committees on Scientific Dishonesty is once again enforcing the Law of the Suspects in that unhappy land. Readers may recall its earlier adventures in suppressing the heretical writings of Bjorn Lomborg, who dared to offend the righteous by exposing real dishonesty in the environmental sciences. This time we find it hurling its pious anathemas at the head of Helmuth Nyborg, Professor Emeritus of Psychology at Aarhus University. It seems that Prof. Nyborg has been courageous or foolhardy enough to publish papers on eugenics, a field which has long been under the interdict of the pathologically pious. Once a favorite playground of what Nyborg refers to as the Academic Left, those worthies abandoned it long ago after discovering its value as a prop for their favorite sport of striking self-righteous poses.
It’s remarkable that there never seems to be a lack of candidates shameless enough to serve as inquisitors on this Danish version of the Court of Star Chamber. New ones keep turning up all the time. Apparently they live in such a hermetically sealed echo chamber that they’re unaware of the rather harsh judgment of history on their antecedents in the Halls of Justice. Such names as Torquemada, Roland Freisler, and Andrey Vishinsky come to mind. Apropos Vishinsky, according to hbd*chick, Jens Mammen, one of the three defenders of scientific righteousness responsible for bringing the Nyborg case to the baleful attention of the Danish inquisitors, was actually a Communist himself for 14 years until 1988, when all the Marxist rats began scurrying off the sinking ship. The other two include Morten Kjeldgaard, who has set up a creepy website devoted to hounding Nyborg, and Jens Kvorning, a “teaching lecturer” in Aalborg University’s Department of Communication and Psychology, an area of expertise which would seem to leave him singularly unqualified to challenge scientific results in the field of eugenics.
As far as the merits of this particular case are concerned, I can but echo hbd*chick’s quote from Steven Pinker’s letter to the Danish Thought Police:
I am writing to protest the shocking and disgraceful treatment of Dr. Helmuth Nyborg following publication of his report on possible gender differences in average IQ scores. Dr. Nyborg may be mistaken, but the issue he is addressing is a factual one, and can only be evaluated by an open examination of the evidence. To ‘investigate’ him, shut down his research, or otherwise harass him because his findings are politically incorrect is unworthy of an institution dedicated to the understanding of reality. It is reminiscent of the persecution of Galileo, the crippling of Soviet science and agriculture under Lysenko, and the attempt of the American religious right wing to inhibit the teaching of evolution in the schools.
No one has the right to legislate the truth. It can only be discovered by free inquiry, and that includes investigations that may make people uncomfortable. This is the foundation of liberal society, and it is threatened by attempts to interfere with Dr. Nyborg and his research. If he is incorrect, that will be established by a community of scholars who examine his evidence and arguments and criticize them in open forums of debate, not by the exercise of force to prevent him from pursuing his research. These are the tactics of a police state, and bring shame on any institution that uses them.
I don’t always agree with Pinker, but you have to hand it to the man. At least he has the right enemies. As for eugenics, the name may have fallen into disfavor, but the science has always carried on under different names. The main difference between Nyborg and the other practitioners is that he is courageous enough to call his specialty by its proper name. The main premise of the field is that there are significant genetic differences among both individual humans and human groups that influence the level of mental and physical performance that individuals can achieve in like circumstances. That premise would seem to be true, as demonstrated by the fact that evolution happened. The alternative view favored by the Danish inquisitors of the world, that no such human biodiversity exists, requires that all human groups, no matter how great the spatial separation, arrived at precisely equal capabilities, particularly as concerns intelligence, around 50,000 years ago, at which point our evolution came to a screeching halt, with the possible exception of certain traits, such as lactose tolerance, that have been scrutinized by the Thought Police and found to be innocent of conflicts with the approved dogmas of political correctness. All this seems rather implausible, unless it is recalled that here we are speaking more of the narrative of a secular religion than anything recognizable as “science.”
Be that as it may, I must add that I am in sympathy with those who would prefer that modern states refrain from further attempts to use the science to “improve” their inmates. Such attempts in the past have been less than successful at enhancing “human flourishing.” As for individuals, we have been practicing eugenics, along with the birds, the bees, and the rest of the mammals, through our choice of mates since time immemorial. If we learn new truths and acquire new technologies that enable individuals to make similar choices in the future with more predictable results, so much the better for us. It’s only to be expected that the Danish inquisitors among us will always seek to deprive us of the right to make such choices. However, I doubt that they’ll ever be able to control “science” in every country as effectively as they do in Denmark. Just as they always have in the past, people will vote with their feet.
Posted on November 15th, 2013
The Blank Slate may be dead, but its zombies still walk among us. One of the most persistent is the myth of the purely cultural origins of warfare. With a pedigree extending at least as far back as Rousseau’s “noble savage,” it has been refined and reinvented many times since then. According to the latest version, the ubiquity of human warfare throughout history is just an unfortunate coincidence. Its independent appearance at several places on the planet is a purely cultural artifact of the introduction of agriculture. This, of course, is very convenient, because the relevant evidence becomes both increasingly sparse and increasingly amenable to “interpretation” to fit pet theories the further one goes back in time.
John Horgan, who blogs for Scientific American, recently published a series of articles in favor of this “nurture not nature” interpretation of warfare. According to their titles, “A New Study of Foragers,” “A New Study of Prehistoric Skeletons,” and “A Survey of Earliest Human Settlements,” all “Undermine the Claim that War has Deep Evolutionary Roots.” A glance at the first of these will familiarize us with the types of “studies” we’re dealing with. The authors of the “new study of foragers” are identified as Douglas Fry and Patrik Soderberg of Abo Akademi University in Finland who, we are informed, are anthropologists. In the first place, one might ask why one would ever uncritically accept the word of any modern anthropologist for anything. Modern anthropology is more of a narrative than a science, and we have recently seen what anyone who strays from that narrative can expect in the form of the almost unbelievably vicious smears and reprisals directed at Napoleon Chagnon.
Indeed, Horgan himself was an avid participant in the Chagnon witch hunt, writing a gushing review of Patrick Tierney’s vile “Darkness in El Dorado” for the New York Times Review of Books. Among other things, the review set forth the kind of “evidence” Horgan finds compelling as follows:
…his book’s faults are outweighed by its mass of vivid, damning detail. My guess is that it will become a classic in anthropological literature, sparking countless debates over the ethics and epistemology of field studies. Tierney evokes Derek Freeman’s “Margaret Mead and Samoa,” which argued that Mead’s portrayal of Samoan life was just a projection of her utopian fantasies. But Mead, at worst, misrepresented her subjects; she did not incite, sicken and corrupt them: When anthropologists speak henceforth of the observer effect, the horrors documented by Tierney will be exhibit A.
It’s a classic in anthropological literature, all right. Readers interested in a somewhat different take are encouraged to read Alice Dreger’s “Darkness’s Descent on the American Anthropological Association.” In a word, the uncritical assumption that anthropologists are all so many purely objective men of science, who needn’t have the slightest fear of endangering their careers if they publish “incorrect” results is perhaps somewhat too optimistic.
With that in mind, let’s have a look at the paper Horgan cites. Entitled “Lethal Aggression in Mobile Forager Bands and Implications for the Origins of War,” it appeared in Science in July of this year. For the purposes of their study, the authors define war as “coalitionary aggression against other groups,” and contend that it was largely absent among the forager groups they studied, including the Aranda and Tiwi of Australia; Kaska, Copper Inuit and Montagnais of North America; Botocudo of South America; !Kung, Hadza and Mbuti of Africa; and Vedda and Andamanese of South Asia. As it happens, there are accounts of at least some of these groups written by people who weren’t anthropologists at a time before their activities were regulated by modern states. For example, in the case of the aborigines of Australia, an interesting account appeared in the October 1814 edition of the British Quarterly Review. The article in question was a review of a book published by Capt. Matthew Flinders, in which he described a voyage to Australia in the years 1801, 1802 and 1803 for the purpose of “completing the discovery of that vast country.” Flinders describes the native Australians as living in family groups, which were almost constantly engaged in “coalitionary aggression” against other families. As recounted in the Quarterly,
The paucity of their numbers would not seem to be owing solely to poverty and scarcity of food. Families and relations are perpetually destroying each other either by stratagem or open combat. If one man seriously injure, but more especially if he put to death, any member of a neighboring family, all the relations of the party aggrieved think it incumbent to put the offending party or any of his relations to death, unless he be willing to expiate the offence by standing exposed to as many as may think fit to hurl their spears at him… When this species of retaliation is not resorted to, the revenge of the family injured extends to every branch of the offending family, and persons on both sides, even to the children, are put to death whenever an opportunity offers.
In the case of the Andamanese, the results of the study are even more dubious. The Andamanese are notoriously violent and aggressive towards outsiders. They routinely killed shipwrecked sailors who fell into their hands, and were bombed by the Japanese during WWII for their hostility. According to the Wiki blurb on them, “In 1974, a film crew and anthropologist Trilokinath Pandit attempted friendly contact by leaving a tethered pig, some pots and pans, some fruit and toys on the beach at North Sentinel Island. One of the islanders shot the film director in the thigh with an arrow. The following year, European visitors were repulsed with arrows.” And so on, and so on. It strikes me as somewhat surprising that none of this data ever fell in the way of Fry and Soderberg when they were collecting their evidence.
I will add another bit of evidence on foragers different from any of the groups addressed in the study. It was supplied by Robert FitzRoy, the Captain of the Beagle during the famous voyage on which Charles Darwin tagged along as the expedition’s naturalist. His story also appeared in the Quarterly, in the edition of December 1839. The foragers in question are the Patagonians of Tierra del Fuego. The ship’s first encounter with them went off peacefully enough, and some agreed to join the voyage as interpreters. The crew had named two of them York Minster and Jemmy Button, and FitzRoy describes their reaction on seeing others of a different tribe on shore in the distance as follows:
To them who had never seen man in his savage state – one of the most painfully interesting sights to his civilized brother – even this distant glimpse of the aborigines was deeply engaging; but York Minster and Jemmy Button asked us to fire at them, saying that they were “Oens-men – very bad men.”
According to FitzRoy’s further account,
From the concurring testimony of the three Fuegians above-mentioned, obtained from them at various times and by many different persons, it is proved that they eat human flesh upon particular occasions, namely, when excited by revenge or extremely pressed by hunger. Almost always at war with adjoining tribes, they seldom meet but a hostile encounter is the result; and then those vanquished and taken are killed and eaten by the conquerors. The arms and breast are eaten by the women; the men eat the legs; and the trunk is thrown into the sea. During a severe winter hunger impels them to lay hands on the oldest woman of their party, hold her head over a thick smoke, and choke her. They then devour every particle of the flesh, not excepting the trunk, as in the former case. Jemmy Button, in telling this horrible story as a great secret, seemed to be much ashamed of his countrymen, and said he never would do so – he would rather eat his own hands. When asked why the dogs were not eaten, he said, “Dog catch iappo” (iapo means otter).
In a word, when independent evidence is available, it is not always in perfect agreement with that of modern anthropologists. The question is, does it really matter? I doubt it. The Horgan school of anthropologists claims to be debating a strawman, or better, a legion of strawmen, of a type that exists only in their own imagination – the “genetic determinists.” Maybe such creatures actually exist, but, if so, I have never encountered one, at least among serious scholars. As Horgan puts it in one of his articles,
One of the most insidious modern memes holds that war is innate, an adaptation bred into our ancestors by natural selection. This hypothesis – let’s call it the “Deep Roots Theory of War” – has been promoted by such intellectual heavyweights as Steven Pinker, Edward Wilson, Jared Diamond, Richard Wrangham, Francis Fukuyama and David Brooks.
In fact, to the best of my knowledge none of these authors have ever said that “war is innate.” What they have said in one form or another is that, while not inevitable, warfare is a potential outcome of the expression of the innate human behavioral traits that have been loosely described as “human nature.” In general, their argument has always been that it behooves us to understand these traits if we wish to avoid warfare in the future. As the great but now largely forgotten Robert Ardrey once put it,
The command to love is as deeply buried in our nature as the command to hate. Amity – as Darwin guessed but did not explore – is as much a product of evolutionary forces as contest and enmity. In the evolution of any social species including the human, natural selection places as heavy a penalty on failure in peace as failure in battle.
In reality, then, the debate isn’t over whether or not warfare is innate, but over how it is best to be avoided. Regardless of whether warfare happened in pre-Neolithic times or not, it has certainly happened since. If there really are such things as innate human behavioral traits, then it seems to me absurd to erect a firewall between warfare and the rest of our observed behavior and claim that, while our other behaviors may be influenced by innate predispositions, warfare is purely a cultural phenomenon. That argument is really nothing but a reflexive grasping after the rotting corpse of the Blank Slate. This impression is given added weight by the habitual hostility of the Horgans of the world to anything emanating from the field of evolutionary psychology.
Horgan himself occasionally seems to recognize the real nature of the dispute. For example, in his article on Prehistoric Skeletons he writes,
Some readers might conclude based on my criticism of Deep Rooters that they are all hawks, warmongers, who think that war, because it is innate, is inevitable and perhaps even beneficial in some sense. Such views were once quite common, especially in the era of Social Darwinism. President Teddy Roosevelt once said, for example, “All the great masterful races have been fighting races. No triumph of peace is quite so great as the supreme triumph of war.” None of the Deep Rooters I have cited subscribe to such odious balderdash. All fervently hope that humanity can eradicate or at least greatly reduce the frequency of war. Deep Rooters believe that we will be better equipped to solve the problem of war if we accept the Deep Root theory. Of course, I disagree with them on this point… I would nonetheless accept the Deep Roots theory if the evidence supported it, but the evidence points in the other direction. That is my main source of disagreement with the Deep Rooters. In the interests of constructive dialogue, however, I’m providing a link, sent to me by anthropologist and prominent Deep Rooter Richard Wrangham, to a column supporting his position. In the column, political scientist and self-described “conservative Darwinian” Larry Arnhart asserts that “explaining the evolutionary propensity to war in human nature is not to affirm this as a necessity that cannot be changed. In fact, understanding war as a natural propensity can be a precondition for understanding how best to promote peace.” Okay, so we all want peace. We just disagree on how to get there.
Again, as I’ve pointed out above, while many of the “Deep Rooters” may agree that there is a “propensity to war in human nature,” I know of none of them who have ever said without qualification that “war is innate.” In fact, the “war is innate” red herring was commonly used by the Blank Slaters of old and is still used by the leftover proponents of that ancient orthodoxy today to promote the false claim that their opponents believe that war is inevitable. Several examples of such arguments may be found in Man and Aggression, a collection of Blank Slater essays edited by Ashley Montagu. Of course, this belated pacific comment by Horgan also flies in the face of his earlier statement, cited above, to the effect that his opponents are the bearers of “an insidious meme.” As far as Horgan being receptive to the evidence of the Deep Rooters is concerned, I very much doubt it. I can sum up the reasons for my doubt in one word: Chagnon. Napoleon Chagnon published a great deal of evidence that conflicted with Horgan’s theories. His response was not exactly a disinterested and objective weighing of the facts. Rather, it was to join enthusiastically in the vilification and smearing of the bearer of that evidence.
Posted on October 22nd, 2013
A consortium led by France’s EDF Energy, including Chinese investors, has agreed with the government of the UK on terms for building a pair of new nuclear reactors at Hinkley Point in the southwest of the country, not far from Bristol. If a final investment decision is made some time next year, and the plants are actually built, they will probably be big (about 1600 Megawatts) pressurized water reactors (PWR’s) based on the French company Areva’s EPR design. These are supposed to be (and probably are) safer, more efficient, and more environmentally friendly than earlier designs. In general, I tend to be pro-nuclear. I would certainly feel a lot safer living next to a nuclear plant than a coal plant. However, I’m a bit ambivalent about these new starts. I think we could be a lot smarter in the way we implement nuclear power programs.
Reactors of the type proposed will burn uranium. Natural uranium consists mostly of two isotopes, U235 and U238, and only U235 can be burnt directly in a nuclear reactor. Why? The answer to that question depends on something called “the binding energy of the last neutron.” Think of a neutron as a bowling ball, and the nucleus of a uranium atom as a deep well. If the bowling ball happens to roll into the well, it will drop over the edge, eventually smacking into the bottom, and releasing the energy it acquired due to the acceleration of gravity in the process. The analogous force in the nucleus of a uranium atom is the nuclear force, incomparably greater than the force of gravity, but it acts in much the same way. The neutron doesn’t notice this very short range force until it gets very close to the nucleus, or “lip of the well,” but when it does, it “falls in” and releases the energy acquired in the process in much the same way. This energy is what I’ve referred to above as “the binding energy of the last neutron.”
When this binding energy is released in the nucleus, it causes it to wiggle and vibrate, something like a big drop of water falling through the air. In the case of U235, the energy is sufficient to cause this “liquid drop” to actually break in two, or “fission.” Such isotopes are referred to as “fissile.” In U238, the binding energy of the last neutron alone is not sufficient to cause fission, but the isotope can still actually fission if the neutron happens to be moving very fast when it hits the nucleus, bringing some of its own energy to the mix. Such isotopes, while not “fissile,” are referred to as “fissionable.” Unfortunately, the isotope U235 is only 0.7 percent of natural uranium. Once it’s burnt, the remaining U238 is no longer useful for starting a nuclear chain reaction on its own.
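To get a feel for how much energy the fissile fraction represents, here is a rough back-of-the-envelope sketch. The figures used (about 200 MeV released per fission, 0.7% U235 by mass) are standard textbook approximations, not numbers from any particular reactor design:

```python
# Rough energy content of 1 kg of natural uranium (textbook approximations).
AVOGADRO = 6.022e23    # atoms per mole
MEV_TO_J = 1.602e-13   # joules per MeV
E_FISSION_MEV = 200.0  # approximate energy released per fission event

U235_FRACTION = 0.007  # U235 is ~0.7% of natural uranium by mass

def fission_energy_joules(mass_g, molar_mass):
    """Energy released if every atom in mass_g grams were to fission."""
    atoms = mass_g / molar_mass * AVOGADRO
    return atoms * E_FISSION_MEV * MEV_TO_J

# Burning only the U235 in 1 kg of natural uranium:
e_u235_only = fission_energy_joules(1000 * U235_FRACTION, 235)

# Hypothetical full burnup of the whole kilogram:
e_full = fission_energy_joules(1000, 238)

print(f"U235 only:   {e_u235_only:.2e} J")
print(f"Full burnup: {e_full:.2e} J")
print(f"Ratio: {e_full / e_u235_only:.0f}x")
```

On these rough numbers, burning the whole kilogram would release over a hundred times more energy than burning the fissile fraction alone, which is the motivation for everything that follows about plutonium and breeding.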
That would be the end of the story as far as conventional reactors are concerned, except for the fact that something interesting happens to the U238 when it absorbs a neutron. As mentioned above, it doesn’t fission unless the neutron is going very fast to begin with. Instead, with the extra neutron, it becomes U239. However, U239 is unstable, and decays into neptunium 239, which further decays into plutonium 239, or Pu239. In Pu239 the binding energy of the last neutron IS enough to cause it to fission. Thus, conventional reactors burn not only U235, but also some of the Pu239 that is produced in this way. Unfortunately, they don’t produce enough extra plutonium to keep the reactor going, so only a few percent of the U238 is “burnt” in addition to the U235 before the fuel has to be replaced and the old fuel either reprocessed or stored as radioactive waste. Even though a lot of energy is locked up in the remaining U238, it is usually just discarded or used in such applications as the production of heavy armor or armor piercing munitions. In other words, the process is something like throwing a log on your fireplace, then fishing it out and throwing it away when only a small fraction of it has been burnt.
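The timescale of this chain is short. Here is a sketch using the textbook two-step decay (Bateman) solution and the standard half-lives, about 23 minutes for U239 and about 2.4 days for Np239; on this timescale Pu239, with its roughly 24,000-year half-life, is effectively stable:

```python
import math

# Half-lives for the capture/decay chain U-238(n)U-239 -> Np-239 -> Pu-239.
T_U239_DAYS = 23.45 / (60 * 24)   # 23.45 minutes, expressed in days
T_NP239_DAYS = 2.356              # about 2.4 days

lam1 = math.log(2) / T_U239_DAYS   # decay constant of U-239
lam2 = math.log(2) / T_NP239_DAYS  # decay constant of Np-239

def chain_fractions(t):
    """Bateman solution for a population starting as pure U-239:
    returns the (U-239, Np-239, Pu-239) fractions at time t in days."""
    u = math.exp(-lam1 * t)
    np_ = lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return u, np_, 1.0 - u - np_

for t in (0.1, 1.0, 10.0, 30.0):
    u, np_, pu = chain_fractions(t)
    print(f"after {t:5.1f} days: U-239 {u:.2%}, Np-239 {np_:.2%}, Pu-239 {pu:.2%}")
```

Within a few hours the U239 is gone, and within a few weeks essentially everything that was captured has become Pu239, ready to be burnt in turn.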
Can anything be done about it? It turns out that it can. The key is neutrons. They not only cause the U235 and Pu239 to fission, but also produce Pu239 via absorption in U238. What if there were more of them around? If there were enough, then enough new Pu239 could be produced to replace the U235 and old Pu239 lost to fission, and a much greater fraction of the U238 could be converted into useful energy. A much bigger piece of the “log” could be burnt.
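The size of the burnt piece of the “log” can be estimated with the conversion ratio, the number of new fissile atoms bred per fissile atom destroyed. Below one, the total fuel ever burnt is a geometric series; at one or above (a breeder), the series diverges and, in principle, nearly all the U238 becomes fuel. A sketch with rough, illustrative numbers (a conversion ratio of about 0.6 is a commonly quoted ballpark for a light-water reactor, not a design figure):

```python
def fuel_utilization(fissile_fraction, conversion_ratio):
    """Fraction of the total heavy metal that can ultimately be fissioned,
    summing the geometric series of breed-and-burn generations.
    Only valid for a conversion ratio below 1."""
    assert conversion_ratio < 1.0
    return fissile_fraction / (1.0 - conversion_ratio)

# Natural uranium is ~0.7% fissile U-235; a conventional reactor breeds
# roughly 0.6 new fissile atoms per fissile atom burnt (rough figure).
print(fuel_utilization(0.007, 0.6))   # 0.0175 -> "only a few percent"
```

That simple series is the whole story behind the “few percent” figure: with a conversion ratio at or above one, the divisor goes to zero and the limit disappears.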
As a matter of fact, what I’ve described has actually been done, in so-called breeder reactors. To answer the question “How?” it’s necessary to understand where all those neutrons come from to begin with. In fact, they come from the fission process itself. When an atom of uranium or plutonium fissions, it releases an average of between 2 and 3 neutrons in the process. These, in turn, can cause other fissions, keeping the nuclear chain reaction going. The chances that they actually will cause another fission depend, among other things, on how fast they are going. In general, the slower the neutron, the greater the probability that it will cause another fission. For that reason, the neutrons in nuclear reactors are usually “moderated” to slower speeds by allowing them to collide with lighter elements, such as hydrogen. Think of billiard balls. If one of them hits another straight on, it will stop, transferring its energy to the second ball. Much the same thing happens in neutron “moderation.”
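The billiard-ball picture can be made quantitative with a standard reactor-physics quantity, the mean logarithmic energy decrement per collision, which gives the average number of elastic collisions needed to slow a fission neutron (around 2 MeV) down to thermal energy (around 0.025 eV):

```python
import math

def xi(A):
    """Mean logarithmic energy loss per elastic collision with a nucleus
    of mass number A (the standard reactor-physics expression)."""
    if A == 1:
        return 1.0  # a head-on-average hydrogen collision is maximally effective
    a = ((A - 1) / (A + 1)) ** 2
    return 1.0 + a / (1.0 - a) * math.log(a)

def collisions_to_thermalize(A, e0_ev=2.0e6, e_th_ev=0.025):
    """Average number of elastic collisions to slow a ~2 MeV fission
    neutron to ~0.025 eV thermal energy in a moderator of mass A."""
    return math.log(e0_ev / e_th_ev) / xi(A)

print(f"hydrogen (A=1):   {collisions_to_thermalize(1):.0f} collisions")
print(f"carbon   (A=12):  {collisions_to_thermalize(12):.0f} collisions")
print(f"uranium  (A=238): {collisions_to_thermalize(238):.0f} collisions")
```

The billiard-ball intuition shows up directly in the numbers: a neutron needs only about 18 collisions with hydrogen to thermalize, over a hundred with carbon, and thousands with a heavy nucleus like uranium, which barely budges when struck.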
However, more neutrons will be produced in each fission if the neutrons aren’t heavily moderated, but remain “fast.” In fact, enough can be produced not only to keep the chain reaction going, but to convert more U238 into useful fuel via neutron absorption than is consumed. That is the principle of the so-called fast breeder reactor. Another way to do the same thing is to replace the U238 with thorium 232, the dominant isotope of the more plentiful, naturally occurring element thorium. When it absorbs a neutron, it eventually decays into U233, which, like U235, is fissile. This thorium breeding cycle has many potential advantages: greater resistance to nuclear weapons proliferation; the ability to run the process at slower average neutron speeds, allowing smaller reactor size and easier control; and less production of dangerous, long-lived transuranic actinides, such as plutonium and americium. In fact, if enough neutrons are flying around, they will fission and eliminate these actinides. That turns out to be very important, because they’re the nastiest components of nuclear waste. If they could be recycled and burned, the residual radiation from the waste produced by operating a nuclear plant for 30 or 40 years could be reduced to a level below that of the original uranium or thorium ore in a matter of only a few hundred years, rather than the many thousands that would otherwise be necessary.
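The neutron bookkeeping behind all this can be sketched with the quantity reactor physicists call eta, the number of neutrons produced per neutron absorbed in the fuel. One neutron must sustain the chain reaction, at least one must be absorbed in fertile material to break even on fuel, and some are inevitably lost, so breeding requires eta comfortably above two. The eta values and loss fraction below are rough, illustrative figures of the kind found in reactor-physics texts, not design numbers:

```python
# Approximate eta values (neutrons produced per neutron absorbed in fuel);
# rough illustrative figures only.
eta = {
    ("U-235",  "thermal"): 2.07,
    ("Pu-239", "thermal"): 2.11,
    ("Pu-239", "fast"):    2.45,
    ("U-233",  "thermal"): 2.29,
}

LOSSES = 0.15  # assumed parasitic absorption + leakage per fuel absorption

for (fuel, spectrum), n in eta.items():
    # One neutron sustains the chain; whatever is left can breed new fuel.
    breeding_margin = n - 1.0 - LOSSES
    tag = "can breed" if breeding_margin > 1.0 else "cannot breed"
    print(f"{fuel} ({spectrum}): eta = {n:.2f}, "
          f"neutrons left for breeding = {breeding_margin:.2f} -> {tag}")
```

With these illustrative numbers the pattern in the text falls out directly: plutonium in a fast spectrum and U233 in a thermal spectrum clear the bar, while U235 and plutonium in an ordinary thermal reactor do not.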
So breeders can use almost all the potential energy in uranium or thorium instead of just a small fraction, while at the same time minimizing problems with radioactive waste. What’s not to like? Why aren’t we doing this? The answer is profit. As things now stand, power from breeder reactors of the type I’ve just described would be significantly more expensive than power from conventional reactors like the EPR. EPRs would use enriched natural uranium, which is still relatively cheap and plentiful, and they would require no expensive reprocessing step. Ask industry spokesmen, and they will generally assure you (and quite possibly believe themselves, because self-interest has always had a strong delusional effect) that we will never run out of natural uranium, that the radioactive danger from conventional reactor waste has been grossly exaggerated, and that there is no long-term proliferation danger in simply discarding plutonium-laced waste somewhere and letting it decay for several thousand years. I’m not so sure.
Now, I have no problem with profit, and I find Hollywood’s obsession with the evils of large corporations tiresome, but I really do think this is one area in which government might actually do something useful. It might involve some mix of increased investment in research and development of advanced reactor technology, including the building of small demonstration reactors, continued robust support for the nuclear Navy, and eliminating subsidies on new conventional reactors. Somehow, we managed to build scores of research reactors back in the ’50s, ’60s, and ’70s. It would be nice if we could continue building a few more now and then, not only for research into breeder technology, but as test beds for new corrosion and radiation resistant materials and fuels. They could also serve to explore high temperature gas-cooled reactors, which could not only produce electricity but facilitate the production of hydrogen from water and of synthetic natural gas from carbon dioxide and coal, both processes that are potentially much more efficient at high temperatures, and even fusion-fission hybrids, if we can ever get fusion to work.
We aren’t going to run out of energy any time soon, but there are now over 7 billion people on the planet. Eventually we will run out of fossil fuels, and depending entirely on wind, solar and other renewables to take up the slack seems a little risky to me. Wasting potential fuel for the reactors of the future doesn’t seem like such a good idea either. Under the circumstances, keeping breeder technology on the table as a viable alternative doesn’t seem like a bad idea.
Posted on October 20th, 2013 5 comments
An interesting article on intelligence recently turned up in Scott Barry Kaufman’s Beautiful Minds, one of the Scientific American blogs. Entitled The Heritability of Intelligence: Not What You Think, it described a recent study of the correlation of different types of cognitive ability with IQ, and the implications regarding the importance of culture to the development of those abilities. In other words, it’s a nature versus nurture paper. Indeed, it went so far as to allude to the study’s significance for the issue of racial IQ differences. Can you guess, dear reader, the conclusion of the article, or at least its basic gist? Of course! The chances that the relentlessly politically correct Scientific American or any of its blogs would ever contain such a statement as, “There are significant racial differences in IQ, and genetic heritability accounts for a large component of those differences,” are about the same as the chance that the Pope’s staff will suddenly sprout leaves. Indeed, I sometimes suspect that Scientific American subscribes to the quantum entanglement theory of intelligence, according to which, if a really smart member of one race dies, an equally smart member of every other race dies at precisely the same moment, regardless of their spatial separation, to maintain exact parity between the IQs of the races.
And, true to form, the entirely predictable burden of the article was that culture accounts for the apparent IQ differences between blacks and whites. That made the following bit from the article all the more surprising:
To be clear: these findings do not mean that differences in intelligence are entirely determined by culture. Numerous researchers have found that the structure of cognitive abilities is strongly influenced by genes (although we haven’t the foggiest idea which genes are reliably important). What these findings do suggest is that there is a much greater role of culture, education, and experience in the development of intelligence than mainstream theories of intelligence have assumed. Behavioral genetics researchers, who parse out genetic and environmental sources of variation, have often operated on the assumption that genotype and environment are independent and do not covary. These findings suggest they very much do.
There’s one more really important implication of these findings, which I’d be remiss if I didn’t mention.
Black-White Differences in IQ Test Scores
In his analysis of the US Army data, the British psychometrician Charles Spearman noticed that the more a test correlated with IQ, the larger the black-white difference on that test. Years later, Arthur Jensen came up with a full-fledged theory he referred to as “Spearman’s hypothesis”: the magnitude of the black-white difference on a test of cognitive ability is directly proportional to the test’s correlation with IQ. In a controversial paper in 2005, Jensen teamed up with J. Philippe Rushton to make the case that this proves that black-white differences must be genetic in origin.
But these recent findings by Kees-Jan Kan and colleagues suggest just the opposite: The bigger the difference in cognitive ability between blacks and whites, the more the difference is determined by cultural influences.
Of course, as anyone who has actually read Jensen’s work is aware, he explicitly supported a correlation between culture and IQ. And, of course, the author is evoking stark nature-nurture divides where none exist, in a fashion that would certainly bring a scowl to the face of orthodox evolutionary psychologists. But beyond all that, what’s really stunning here is the author’s suggestion that the heritability of black/white intelligence differences is somehow the “orthodox” or mainstream point of view. Doesn’t he actually read Scientific American himself?
After all, when Murray and Herrnstein published The Bell Curve, with its claim that IQ is 40% to 80% heritable, the SA review of their book called them racists.
After all, in October 1973 a half-page advertisement entitled “Resolution Against Racism” appeared in the New York Times. With over 1000 academic signatories, it condemned “racist research,” denouncing in particular Jensen, Shockley, and Herrnstein.
After all, the American Anthropological Association convened a panel discussion in 1969 at its annual general meeting, shortly after the appearance of Jensen’s first paper on the heritability of intelligence, where several participants labelled his research as “racist.”
After all, in a review of The Bell Curve, Steven Rosenthal referred to their work as “Academic Nazism.”
I could go on and on. In a word, other than the absurd implication that “behavioral genetics researchers” claim that intelligence and culture do not co-vary (by all means, if anyone knows one, please name her/him), and other than the equally absurd implication that Jensen and Rushton believed that, because intelligence was, in part heritable, it was therefore uninfluenced by culture, quite apart from all that, the notion that the theoretical heritability of black/white intelligence differences is “mainstream” is ludicrous. In fact, the “mainstream,” orthodox position, constantly reinforced in the popular as well as scientific literature, not to mention the pages of Scientific American, is that Jensen, Shockley, and Herrnstein, and those who agree with them, are deliberately evil racist miscreants.
Heaven forefend that I should ever stray from that orthodoxy by a jot or a tittle. I do, however, think it would be quite interesting, though, of course, grossly immoral, if the dictator of some sub-Saharan country in Africa were to implement a draconian program of eugenics, exclusively for his country’s black population, promoting high IQ. Suppose it were actually possible to keep it going for 200 or 300 years, and it actually succeeded (in spite of the fact that we don’t have “the foggiest idea” of where the relevant genes are, and because a = b, and b = c, it would quite clearly be mathematically impossible)? At that point it would become necessary for the editors of Scientific American, at least in that country, to begin publishing articles proving that the lower IQ of whites compared to blacks was entirely an artifact of culture. It might actually be quite amusing.
And, at the risk of provoking completely unwarranted accusations of political incorrectness, I might add that I wish it really would become as orthodox as Mr. Kaufman suggests to study inherited IQ differences between human groups, and even to come up with a useful metric for measuring the same. True, it might offend some people, but, among other things, it might be quite useful as a tool for assessing the relative merits of the new moral systems that are cropping up these days. We have certainly felt the lack of such a tool in the past.
In fact, the “covariance” between morality and intelligence has become quite pronounced in recent times. This is particularly true of one of the “new-fashioned” moralities, of the type we are constantly assured we need to replace the old ones in the name of promoting “human flourishing.” The one I have in mind is Marxism, and never did such a new secular religion, complete with a revolutionary new morality, introduce itself to the world with more extravagant promises of the “human flourishing” to come. That’s where the usefulness of the proposed metric comes in. I would maintain, quite apart from what was promised, that one of the most remarkable aspects of the reality of Marxist “human flourishing” that we have now been fortunate enough to witness has been the decapitation of at least two countries: the former Soviet Union and Cambodia.
In round numbers, 25 million of a population of something under 200 million in the Soviet Union, and two million of a population of around seven million in Cambodia, were shot, starved, or tortured to death in these two countries in the interest of promoting “human flourishing.” These millions were not randomly chosen. They were, in fact, an instance of reverse eugenics in action. The historical source material is there in abundance for anyone who cares to look. Read, for example, Solzhenitsyn’s The Gulag Archipelago, or Survival in the Killing Fields by Haing Ngor and Roger Warner. In both cases, the victims came disproportionately from the ranks of each nation’s best and brightest; its scientists, its engineers, its literary and philosophical intelligentsia, and anyone else who happened to be educated beyond the mean.
It seems to me wildly implausible that these events had no significant impact on the heritable cognitive abilities of the populations of these two nations, whether in the form of IQ or any other plausible measure. Would not a metric of exactly what these effects were be extremely useful in helping us decide whether the whole project of coming up with yet another wonderful new morality is really in our best interests or not? Who knows, we might find out that there are actually better ways to promote “human flourishing” after all.
Posted on October 19th, 2013 No comments
Who says there’s no such thing as German humor? Take, for example, some of the comments left by Teutonic wags after an article about the recent fusion “breakthrough” reported by scientists at Lawrence Livermore National Laboratory working on the National Ignition Facility (NIF). One of the first was left by one of Germany’s famous “Greens,” who was worried about the long term effects of fusion energy. Very long term. Here’s what he had to say:
So nuclear fusion is green energy, is it? The opposite is true. Nuclear fusion is the form of energy that guarantees that any form of Green will be forever out of the question. In comparison, Chernobyl is a short-lived joke! Why? Have you ever actually considered what will be “burned” with fusion energy? Hydrogen, one of the two components of water, (and a material without which life is simply impossible)! Nuclear fusion? I can already see the wars over water coming. And, by the way, the process is irreversible. Once hydrogen is fused, it’s gone forever. Nothing and no one will ever be able to make water out of it ever again!
I’m not kidding! The guy was dead serious. Of course, this drew a multitude of comments from typical German Besserwisser (better knowers), such as, “If you don’t have a clue, you should shut your trap.” However, some of the other commenters were more light-hearted. For example,
No, no, no. What eu-fan (the first commenter) doesn’t seem to understand is that this should be seen as a measure against the rise in sea level that will result from global warming. Less hydrogen -> less water -> reduced sea level -> everything will be OK.
Another hopeful commenter adds,
…if it ever actually does succeed, this green fusion, can we have our old-fashioned light bulbs back?
Noting that the fusion of hydrogen produces helium, another commenter chimes in,
So, in other words, if a fusion reactor blows up, the result will be a global bird cage: The helium released will make us all talk like Mickey Mouse!
In all seriousness, the article in Der Spiegel about the “breakthrough” wasn’t at all bad. The author actually bothered to ask a local fusion expert, Sibylle Günter, Scientific Director of the Max Planck Institute for Plasma Physics, about Livermore’s “breakthrough.” She replied,
The success of our colleagues (at Livermore) is remarkable, and I don’t want to belittle it. However, when one speaks of a “breakeven point” in the classical sense, in which the fusion energy out equals the total energy in, they still have a long way to go.
That, of course, is entirely true. The only way one can speak of a “breakthrough” in the recent NIF experiments is by dumbing down the accepted definition of “ignition” from “fusion energy out equals laser energy in” to “fusion energy out equals energy absorbed by the target,” a much lower amount. That didn’t deter many writers of English-language reports, who couldn’t be troubled to fact check Livermore’s claims with the likes of Dr. Günter. In some cases the level of fusion wowserism was extreme. For example, according to the account at Yahoo News,
After fifty years of research, scientists at the National Ignition Facility (NIF) in Livermore, have made a breakthrough in harnessing and controlling fusion.
According to the BBC, NIF conducted an experiment where the amount of energy released through the fusion reaction was more than the amount of energy being absorbed by it. This process is known as “ignition” and is the first time it has successfully been done anywhere in the world.
I’m afraid not. The definition of “ignition” that has been explicitly accepted by scientists at Livermore is “fusion energy out equals laser energy in.” That definition puts them on a level playing field with their magnetic fusion competitors. It’s hardly out of the question that the NIF will reach that goal, but it isn’t there yet. Not by a long shot.
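The distance between the two definitions is easy to quantify. The numbers below are round, order-of-magnitude figures of the kind widely reported for the late-2013 NIF shots, used here only for illustration; consult the published results for exact values:

```python
# Rough, illustrative figures for the late-2013 NIF experiments
# (order of magnitude only, not the official published values).
laser_energy_j     = 1.8e6   # ~1.8 MJ of laser light delivered
energy_into_fuel_j = 1.0e4   # ~10 kJ actually absorbed by the fuel
fusion_yield_j     = 1.4e4   # ~14 kJ of fusion energy out

fuel_gain  = fusion_yield_j / energy_into_fuel_j   # the dumbed-down metric
laser_gain = fusion_yield_j / laser_energy_j       # the classical metric

print(f"fusion out / energy into fuel: {fuel_gain:.2f}  (>1: the 'breakthrough')")
print(f"fusion out / laser energy in:  {laser_gain:.4f} (well under 1% of breakeven)")
```

In other words, the “breakthrough” metric exceeds one while the classical breakeven metric is still two orders of magnitude short, which is exactly Dr. Günter’s point.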
Posted on October 14th, 2013 2 comments
There are still objective moralists – lots of them. Of course, billions of people on the planet are objective moralists because they believe in God, but that’s the trivial case. I’m not referring to them. I’m referring to the legions of philosophers, ethicists, and moralists who sawed that particular branch off long ago, and yet imagine they can still sit on it. It reminds me of an old “Itchy and Scratchy” episode on “The Simpsons.” Itchy tears out Scratchy’s heart and hands it to him as a valentine. Scratchy is charmed, and carries on as if nothing were amiss until he happens to read the bold headline in his newspaper, “You Need a Heart to Live!” So it is with the objective moralists. They insist that their treasured object needs neither a heart nor a God to exist. It exists because they say so, and after all, they are the experts. More importantly, it exists because they would not at all approve of a world in which it didn’t.
An interesting example of the genre recently turned up in the pages of The New Atlantis in the form of an article entitled, The Evolutionary Ethics of E. O. Wilson. It was penned by Whitley Kaufman, a professor of philosophy at the University of Massachusetts Lowell. Kaufman is also an objective moralist, and his article is intended as a refutation of E. O. Wilson’s “evolutionary ethics.” He informs us that “the discipline of evolutionary ethics can be divided into two broad camps.” Supposedly Wilson belongs to the first camp, which “views evolutionary explanations of morality as a way to improve our understanding of what is moral and to put ethical claims on a stronger foundation.” However, Kaufman finally gets around to telling us where he stands in describing the second camp:
But there is a second, more radical school of thought in evolutionary ethics. This view holds that evolutionary biology, rather than providing a basis for improving or modernizing ethics, shows that the idea of objective ethical rules is inherently mistaken.
Returning to the same theme a bit later he writes,
…the discovery that ethical values have been shaped by evolution should not necessarily have any dire implications for the objective status of ethical claims.
That might well be true if there were even the faintest basis for the “objective status of ethical claims.” In fact, there is none, and Kaufman makes no effort to supply one. Objective moralists seldom do. It seems to them that the Good and Evil objects that dance before their eyes are so light that they can float about in the ether without support. It’s a common illusion among those who have reached terminal velocity as gravity pulls them crashing down to earth.
By all means, read Kaufman’s essay from end to end. You will search in vain for any justification of the claim that there is such a thing as objective morality. Instead, you will find a very typical mélange of appeals to emotion, moralistic posing, and insistences that, because the author wouldn’t like it if there were no objective morality, therefore objective morality must exist.
For example, in a section entitled Disquieting Precedents, he dangles familiar bugaboos before our eyes. They include Social Darwinism, eugenics, and, of course, the Nazis. These are all, supposedly, the misshapen children of evolutionary ethics. In a nutshell, the argument goes like this: I feel really, really strongly that Social Darwinism, eugenics, and Nazism are evil. It would be really, really outrageous for anyone to believe that Social Darwinism, eugenics, and Nazism are good. Therefore, it follows that Social Darwinism, eugenics, and Nazism are objectively evil. Using similar logic, one can easily prove the existence of a God. After all, if God didn’t exist, we couldn’t go to heaven after we die, the bad people we resent wouldn’t go to hell, and our prayers for our favorite football team would never be answered. Therefore, there must be a God.
A little later, Kaufman puts this “it just can’t be” argument into an even simpler form. Taking issue with Wilson he writes,
In his 1986 essay “Moral Philosophy as Applied Science,” written with philosopher Michael Ruse, he (E. O. Wilson) argues that we now understand that we have been “deceived by our genes” into believing that morality objectively binds us, that there is a real right versus wrong.
This view is best characterized as a form of moral nihilism, the idea that moral obligations do not exist. Wilson tries to avoid the nihilistic position by insisting that the illusion of right and wrong is so deeply built into us that even recognizing it as an illusion will not likely make a difference in our behavior. But committed moral nihilists reject this response: realizing that moral claims are illusions surely means that moral claims are false. There is, under this view, no real ethical difference between the actions of the vilest criminal and the most virtuous saint.
In other words, we have the following additional arguments for objective morality: a) I don’t like moral nihilists at all, and, since moral nihilists deny the existence of objective right and wrong, therefore objective right and wrong must exist, b) I don’t at all like the idea that there is no objective moral difference between the vilest criminal and the most virtuous saint, so there must be an objective moral difference between them, and, c) It would be a great shame if the mirage of a cool spring of water and palm trees shimmering ahead of me on the desert floor weren’t real. Therefore they must be real. Do any of these arguments make sense to you? They certainly don’t to me. A bit further on Kaufman writes,
There are stronger grounds than Wilson offers, however, for rejecting the moral nihilism that some say is a consequence of evolutionary biology. Consider an analogy with mathematics and science. Like our ability to think about the morality of our actions, the cognitive abilities underlying mathematics and science are in some sense products of evolution. But this fact has no significant implications regarding our ability to objectively study mathematics or physics, and it certainly does not imply that numbers, molecules, or, for that matter, genes, brains, and bodies studied by evolutionary biologists are fictions. Likewise, the discovery that ethical values have been shaped by evolution should not necessarily have any dire implications for the objective status of ethical claims… To try to do ethics without genuine values and prescriptive moral principles is like trying to do science without recourse to facts and observations.
There’s a novel proof for you. Objective Good and Evil must exist because Prof. Kaufman requires them to do his job. Actually, I’m entirely willing to believe in genuine values and prescriptive moral principles if Professor Kaufman could just catch one in his butterfly net and bring it in for me to observe. That’s really where his ox is gored. If there is no objective morality, people like him really have nothing to teach us, other than their opinions tarted up as “objects.” I’m sorry about that, but the fact doesn’t alter reality one bit. According to Kaufman,
In order to fully comprehend human nature, there must always be a place for philosophy, history, literary studies, and even theology – disciplines that complement the natural sciences and fill in the picture of the human being as a free and rational agent.
I personally don’t care what discipline my knowledge comes from. You can call it science, or philosophy, or history, or whatever you like. But regardless of where it comes from, I must insist that if people make assertions about objects that are supposed to exist independently of their subjective minds, they provide some data, some actual evidence that those objects exist. Absent such data, but with plenty of data demonstrating that those “objects” are just what E. O. Wilson says they are – subjective illusions – I will continue in the belief that they are just that.
Evolved behavioral predispositions are the ultimate reason for the existence of human morality. Absent those predispositions, our morality as we know it would cease to exist. In my opinion, that is the simple truth. It will remain the truth whether its implications are unpleasant to the Kaufmans of the world or not. Social Darwinism, eugenics, and Nazism are obviously possible, though hardly inevitable, outcomes if people engage in faulty reasoning about what they should do in response to their moral emotions. If we really want to avoid such outcomes in the future, wouldn’t it be advisable to understand the truth about our moral emotions and where morality comes from? It seems to me that would be wiser than attempting to ban them by insisting that everyone believe in imaginary objects. That would amount to insisting that we repeat the same mistakes over again. After all, there were no stronger believers in objective morality than the Nazis unless, perhaps, it was the Communists. For them, the ultimate, objective Good was the welfare of the German Volk. They tolerated no moral relativism on that score whatsoever. For the Communists, the objective Good was achieving the future classless utopia. They, too, allowed no moral relativism touching on that ultimate goal. It seems to me that the lesson we really should have learned from Nazism and Communism is that such illusions of objective Good can be very dangerous, and we should be wary of anyone who comes along trying to peddle a new and improved version.
There is no reason we will cease to be moral beings because we have finally learned to understand morality. Just as E. O. Wilson said, it is our nature to be moral beings. If there be moral nihilists who assume they can break the rules because the rules are conventions rather than objects, we will continue to punish them just as we have always punished such moral nihilists in the past. I, for one, will have no problem with that. However, it seems to me that the interactions of modern nation states armed with nuclear weapons bear little resemblance to those that prevailed during the long period over which the behavioral traits we associate with morality evolved. Under the circumstances it seems to me imprudent to regulate those interactions with reference to imaginary Good and Evil objects. We did, after all, have some rather unpleasant experiences during the last century trying to do just that. Let us refrain from compounding the error by attempting to repeat those experiments. I have very little faith in the efficacy of the vaunted intelligence of our species. However, it seems to me that in such cases we should leave off trying to cobble together new moral systems and actually try to be reasonable.
As for Good and Evil objects, I am not intransigent. I am entirely willing to believe in them. All I ask is that Professor Kaufman rope one and show it to me.
Posted on October 10th, 2013 No comments
It has always seemed plausible to me that some clever scientist(s) might find a shortcut to fusion that would finally usher in the age of fusion energy, rendering the two “mainstream” approaches, inertial confinement fusion (ICF) and magnetic fusion, obsolete in the process. It would be nice if it happened sooner rather than later, if only to put a stop to the ITER madness. For those unfamiliar with the field, the International Thermonuclear Experimental Reactor, or ITER, is a gigantic, hopeless, and incredibly expensive white elephant and welfare project for fusion scientists currently being built in France. In terms of pure, unabashed wastefulness, think of it as a clone of the International Space Station. It has always been peddled as a future source of inexhaustible energy. Trust me, nothing like ITER will ever be economically competitive with alternative energy sources. Forget all your platitudes about naysayers and “they said it couldn’t be done.” If you don’t believe me, leave a note to your descendants to fact check me 200 years from now. They can write a gloating refutation to my blog if I’m wrong, but I doubt that it will be necessary.
In any case, candidates for the hoped-for end run around magnetic and ICF keep turning up, all decked out in the appropriate hype. So far, at least, none of them has ever panned out. Enter two-stage laser fusion, the latest pretender, introduced over at NextBigFuture with the assurance that it can achieve “10x higher fusion output than using the laser directly and thousands of times better output than hitting a solid target with a laser.” Not only that, but it actually achieved the fusion of boron and normal hydrogen nuclei, which produces only stable helium atoms. That’s much harder to achieve than the usual deuterium-tritium fusion between two heavy isotopes of hydrogen, one of which, tritium, is radioactive and found only in tiny traces in nature. That means it wouldn’t be necessary to breed tritium from the fusion reactions just to keep them going, one of the reasons that ITER will never be practical.
Well, I’d love to believe this is finally the ONE, but I’m not so sure. The paper describing the results NBF refers to was published by the journal Nature Communications. Even if you don’t subscribe, you can click on the figures in the abstract and get the gist of what’s going on. In the first place, one of the lasers has to accelerate protons to high enough energies to overcome the Coulomb repulsion of the boron nuclei, which have been stripped of their electrons by the other laser. Such laser particle accelerators are certainly practical, but they only work at extremely high power levels. In other words, they require what’s known in the business as petawatt lasers, capable of achieving powers in excess of a quadrillion (10 to the 15th power) watts. Power comes in units of energy per unit time, and such lasers generally reach the petawatt threshold by producing a modest amount of energy in a very, very short time. Often, we’re talking picoseconds (trillionths of a second).
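A little back-of-envelope arithmetic (mine, not the paper’s) shows why a “petawatt” is less impressive in energy terms than it sounds: at picosecond pulse lengths, a full petawatt works out to roughly a kilojoule per shot.

```python
# Back-of-envelope: a petawatt laser reaches its enormous power by
# squeezing a modest amount of energy into a picosecond-scale pulse.
power_watts = 1e15      # 1 petawatt = 10^15 W
pulse_seconds = 1e-12   # 1 picosecond

# Energy = power x time
pulse_energy_joules = power_watts * pulse_seconds
print(pulse_energy_joules)  # 1000.0 J per shot
```

A kilojoule per shot is real energy, but to matter for a power plant it has to be delivered many times per second, which is exactly what glass lasers can’t do.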
Now, you can do really, really cool things with petawatt lasers, such as pulling electron-positron pairs right out of the vacuum. However, their practicality as drivers for fusion power plants, at least in their current incarnation, is virtually nil. The few currently available (at the University of Rochester’s Laboratory for Laser Energetics, the University of Texas at Austin, the University of Nevada, Reno, and elsewhere) are glass lasers. There’s no way they could achieve the “rep rates” (shot frequency) necessary for useful energy generation. Achieving lots of fusions, but only for a few picoseconds, isn’t going to solve the world’s energy problems.
As it happens, conventional accelerators can also be used for fusion. As a matter of fact, it’s a common way of generating neutrons for such purposes as neutron radiography. Unfortunately, none of the many fancy accelerator-driven schemes for producing energy that people have come up with over the years has ever worked. There’s a good physical reason for that. Instead of using their energy to overcome the Coulomb repulsion of other nuclei (like charges repel, and atomic nuclei are all positively charged) and fuse with them, the accelerated particles prefer to uselessly dump that energy into the electrons surrounding those nuclei. As a result, it has always taken more energy to drive the accelerators than could be generated in the fusion reactions. That’s where the “clever” part of this scheme comes in. In theory, at least, all those pesky electrons are gone, swept away by the second laser. However, that, too, is an energy drain. So the question becomes: can both lasers be run efficiently enough, at high enough rep rates, and with enough output to strip the boron atoms and drive the fusion reactions, yielding more energy than it takes to run the lasers? I don’t think so. Still, it was a very cool experiment.
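To see how steep that hill is, here is a rough energy-balance sketch. The per-shot laser energy and the wall-plug efficiency below are my own illustrative assumptions, not figures from the paper; the ~8.7 MeV yield per proton-boron reaction is the standard value for p + ¹¹B → 3α.

```python
# Rough energy-balance sketch for the two-laser p-11B scheme.
# The laser energy and efficiency figures are illustrative assumptions.
MEV_TO_J = 1.602e-13                          # 1 MeV in joules
fusion_yield_per_reaction = 8.7 * MEV_TO_J    # p + 11B -> 3 alpha, ~8.7 MeV

laser_energy_on_target = 2e3     # J per shot, both lasers combined (assumed)
wall_plug_efficiency = 0.01      # ~1% is generous for glass petawatt lasers

# Electricity consumed per shot:
electrical_energy_per_shot = laser_energy_on_target / wall_plug_efficiency

# Reactions per shot needed just to pay back the electricity bill:
breakeven_reactions = electrical_energy_per_shot / fusion_yield_per_reaction
print(f"{breakeven_reactions:.1e}")  # ~1.4e17 reactions per shot
```

Experiments of this kind report reaction yields many orders of magnitude below that, which is why “scientific breakeven” and “a power plant” are very different claims.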
Posted on October 9th, 2013
It’s an ill wind that blows nobody good. The government shutdown is making life miserable for federal employees and government contractors, but it’s the greatest show on earth for students of human nature. Ingroup/outgroup behavior, first described in formal academic terms by Sir Arthur Keith, Freudianized by Ardrey as the Amity/Enmity Complex, and probably the most important “root cause” of human warfare throughout the ages, can be observed in its crudest forms on both the left and the right of the ideological spectrum. As Keith put it,
Seeing that all social animals behave in one way to members of their own community and in an opposite manner to those of other communities, we are safe in assuming that early humanity, grouped as it was in the primal world, had also this double rule of behavior. At home they applied Huxley’s ethical code, which is Spencer’s code of amity; abroad their conduct was that of Huxley’s cosmic code, which is Spencer’s code of enmity.
To judge by the partisan political blogs, Keith’s early humanity was quite successful in passing those traits along to later humanity. Typically, commenters in such forums are firmly convinced that the “others” are not just misguided and mistaken. They are perceived as disgusting, stupid, and deliberately evil. For example, from the comment section of the left-wing Talking Points Memo, in response to an article about who’s to blame for the shutdown,
The hillbilly homegrown terrorists know what they’re doing, and the whole point is to shut the country down and cause pain for the people they don’t like (aka “other Americans”).
Republicans don’t believe in polls. Or science. Or math. Or medicine… I’m not really sure what they do believe in besides the “invisible hand” of the market and the concept that an “invisible man in the sky” has chosen them specifically to spread gun-love and lecture poor people, minorities and women on why they are inferior.
Republicans hate the black man in the White House and that hatred trumps everything else. They will force default just to spite us.
And, from the other side of the ideological spectrum,

170 years ago it was whether a certain race of people should be enslaved; today it is whether our entire population should be shackled and softly enslaved by an overly oppressive government.
My (grand)father’s Democrat Party didn’t have Marxist garbage like Van Jones nor feral types like Alan Grayson. Since I would never answer to such garbage we’re back in 1850′s America.
I do think a tipping point has been reached when you have the specter of the federal government literally “barrycading” (a deliberate solecism on the right, playing on the President’s name) state roads, to prevent people from getting a view, a corrupt attorney general literally stepping into states, to disallow them to institute basic voting safeguards. A labor department literally blocking the gates of a company in South Carolina, because they didn’t pay proper respects to another political group. This government is harming real people now, for no more reason than thin skinnedness and spite.
Notice the symmetry? As usual, both sides can see the other’s spite and hatred, but not their own. For example, from the Powerline comments,
Conservatives think that liberals are wrong, but liberals think that conservatives are evil.
Readers of Jonathan Haidt’s The Righteous Mind will recognize his “inner lawyer” at work here, busily rationalizing moral emotions. As usual, Haidt’s “rational tail” is hard at work wagging the “emotional dog.”
We are less than a year away from the centennial anniversary of the start of World War I, and there are some interesting commonalities between the shutdown and that conflict. Since humans were/are involved in both conflicts, it is not surprising to find the human traits associated with status seeking and dominance figuring as important factors in both. Students of history will recall that the relevant suite of emotions went by the name of “honor” in World War I. Prior to that conflict, Austria-Hungary had directly challenged Russia by annexing Bosnia-Herzegovina in 1908. Russia, recently weakened by the 1905 Revolution and defeat in the Russo-Japanese War, had been forced to back down, and was perceived, both by herself and others, as having acted weakly. In 1914, Austria challenged Russia again, mobilizing against Russia’s ally, Serbia, and shelling Belgrade. This time, however, Russia was determined not to lose face. Her leaders imagined all sorts of dire consequences if she backed down again, ignored the immeasurably more dire consequences of not backing down, and ordered their own army to mobilize. That got the ball rolling in Germany, and the rest is history.
In the last government shutdown during the Clinton Administration, the Republicans also backed down, in part because of polls showing the public blamed them for the mess, and were humbled just as the Russians were in 1908. Now the Republicans face their own 1914. In what they probably perceive in more or less the same way as the Russians perceived the shelling of Belgrade, their political enemies forced through a program they bitterly opposed without a single Republican vote. Now they, too, refuse to back down, and the Democrats are just as determined not to lose face. One must hope that the outcome won’t be quite as drastic for the Republicans and Democrats as it was for Russia and Austria-Hungary.