Helian Unbound

The world as I see it
  • Women in Ranger School! What’s with That?!

    Posted on August 25th, 2015 Helian No comments

    No doubt if you’ve been following this story you’ve noticed the howls coming from the usual quarters.  Take it with a grain of salt.  There’s no reason for Ranger School to be “dumbed down” for women to pass if the mission is anything like it was when I went through.  Upper body strength isn’t critical for Ranger type missions, and that includes rock/mountain climbing, rappelling, etc.  The premium is on being able to pack a serious load under adverse conditions of sleep and food deprivation, tolerate extremes of heat and cold, and “function” when you’re assigned to lead small unit operations.  A strong woman can do all those things.  There are ranger units, but the ranger and airborne badges are also marks of prestige.  Women should have the right to wear them if they can handle the physical and mental challenges.

    When I went through, they found out I was a good swimmer, so I was appointed “far shore lifeguard” of my ranger training unit.  That meant I had to strip and carry a rope across streams when we came to them and make a rope bridge so the others could shinny across without getting wet.  I suppose I would have left my jockey shorts on if women were around.  Once the ranger sergeants just had me stand in freezing water up to my neck and pull the short guys across a deep spot by the webbing on their helmets.  When I finally got out I couldn’t straighten out from hypothermia.  We still had to wade through 600 yards of cypress swamp, but I was very fortunate to find a big fire when we finally got through that.  It’s probably the closest I’ve ever come to dying, and years later four guys did die of hypothermia.  They’re probably more careful now.  The next day it was so cold (in Florida!) that my sodden fatigues started freezing on my body.  Fortunately we all had two pair, so I stripped them off, buried them, and put on my dry ones.  No doubt they’re still down there rotting away somewhere.

    We got one C ration a day (we’re talking about the ancient times before MREs), and were starving at the end of the swamp phase.  The first signs were dizziness when you went from a prone position to standing up.  Some of the guys started hallucinating at the end.  My ranger buddy seriously believed he was standing in line at Mama Leoni’s Restaurant in New York City at one point.  I got kind of worried about him.

    Occasionally Huey helicopters would pick us up to take us from one mission to another, with “aggressors” usually waiting for us when we reached our landing zones.  We would sit on either side with our legs dangling over the edge holding onto a little strap “seat belt.”  Those chopper pilots were crazy, and I could swear my feet brushed against the top of the forest foliage one time.  By that time I was so dazed it didn’t bother me.

    In those days we parachuted into Eglin for the swamp phase from a combat training altitude of 800 feet.  Actual combat jumps were from 500 feet.  After my stick landed I looked up and saw the next one jump.  One guy’s chute was nothing but a ball of silk.  His reserve caught him a fraction of a second before he would have hit the ground.  When a later class jumped in, a guy died when he landed on an old concrete airstrip and fell backwards, driving the rim of his steel helmet into his neck.  There were a few “laig” (leg, non-airborne infantry) rangers around, but they were rare.

    I didn’t notice any powder rooms when we were on the march.

    [Image: Airborne and Ranger tabs with jump wings]

  • Panksepp, Animal Rights, and the Blank Slate

    Posted on August 23rd, 2015 Helian 2 comments

    So who is Jaak Panksepp?  Have a look at his YouTube talk on emotions at the bottom of this post, for starters.  A commenter recommended him, and I discovered the advice was well worth taking.  Panksepp’s The Archaeology of Mind, which he co-authored with Lucy Biven, was a revelation to me.  The book describes a set of basic emotional systems that exist in all, or virtually all, mammals, including humans.  In the words of the authors:

    …the ancient subcortical regions of mammalian brains contain at least seven emotional, or affective, systems:  SEEKING (expectancy), FEAR (anxiety), RAGE (anger), LUST (sexual excitement), CARE (nurturance), PANIC/GRIEF (sadness), and PLAY (social joy).  Each of these systems controls distinct but specific types of behaviors associated with many overlapping physiological changes.

    This is not just another laundry list of “instincts” of the type often proposed by psychologists at the end of the 19th and the beginning of the 20th centuries.  Panksepp is a neuroscientist, and has verified experimentally the unique signatures of these emotional systems in the ancient regions of the brain shared by humans and other mammals.  Again quoting from the book,

    As far as we know right now, primal emotional systems are made up of neuroanatomies and neurochemistries that are remarkably similar across all mammalian species.  This suggests that these systems evolved a very long time ago and that at a basic emotional and motivational level, all mammals are more similar than they are different.  Deep in the ancient affective recesses of our brains, we remain evolutionarily kin.

    If you are an astute student of the Blank Slate phenomenon, dear reader, no doubt you are already aware of the heretical nature of this passage.  That’s right!  The Blank Slaters were prone to instantly condemn any suggestion that there were similarities between humans and other animals as “anthropomorphism.”  In fact, if you read the book you will find that their reaction to Panksepp and others doing similar research has been every bit as allergic as their reaction to anyone suggesting the existence of human nature.  In the field of animal behavior, however, the Blank Slaters are anything but a quaint artifact of the past.  Diehard disciples of the behaviorist John B. Watson and his latter day follower B. F. Skinner, Blank Slaters of the first water, still haunt the halls of academia in significant numbers, and still control the message in any number of “scientific” journals.  There they have been following their usual “scholarly” pursuit of ignoring and/or vilifying anyone who dares to disagree with them ever since the heyday of Ashley Montagu and Richard Lewontin.  In the process they have managed to suppress or distort a great deal of valuable research bearing directly on the wellsprings of human behavior.

    We learn from the book that the Blank Slate orthodoxy has been as damaging for other animals as it has been for us.  Among other things, it has served as the justification for indifference to or denial of the feelings and consciousness of animals.  The possibility that this attitude has contributed to some rather gross instances of animal abuse has been drawing increasing attention from those who are concerned about their welfare.  See, for example, the website of Panksepp admirer Temple Grandin.  According to Panksepp & Biven,

    Another of Descartes’ big errors was the idea that animals are without consciousness, without experiences, because they lack the subtle nonmaterial stuff from which the human mind is made.  This notion lingers on today in the belief that animals do not think about nor even feel their emotional responses.

    Many emotion researchers as well as neuroscience colleagues make a sharp distinction between affect and emotion, seeing emotion as purely behavioral and physiological responses that are devoid of affective experience.  They see emotional arousal as merely a set of physiological responses that include emotion-associated behaviors and a variety of visceral (hormonal/autonomic) responses, without actually experiencing anything – many researchers believe that other animals may not feel their emotional arousals.  We disagree.

    Some justify this rather counter-intuitive belief by suggesting that it is impossible to really experience or be conscious of emotions (affects) without language.  Panksepp & Biven’s response:

    Words cannot describe the experience of seeing the color red to someone who is blind.  Words do not describe affects either.  One cannot explain what it feels like to be angry, frightened, lustful, tender, lonely, playful, or excited, except indirectly in metaphors.  Words are only labels for affective experiences that we have all had – primary affective experiences that we universally recognize.  But because they are hidden in our minds, arising from ancient prelinguistic capacities of our brains, we have found no way to talk about them coherently.

    With such excuses, and the fact that they could not “see” feelings and emotions in their experiments with “reinforcement” and “conditioning,” the behaviorists concluded that the feelings of the animals they were using in their experiments didn’t matter.  They were outside the realm of “science.”  Again from the book,

    Much as we admire the scientific finesses of these conditioning experiments, we part company with (Joseph) LeDoux and many of the others who conduct this kind of work when it comes to understanding what emotional feelings really are.  This is because they studiously ignore the feelings of their animals, and they often claim that the existence or nonexistence of the animals’ feelings is a nonscientific issue (although there are some signs of changing sentiments on these momentous issues).  In any event…, LeDoux has specifically endorsed the read-out theory – to the effect that affects are created by neocortical working-memory functions, uniquely expanded in human brains.  In other words, he see affects as a higher-order cognitive construct (perhaps only elaborated in humans), and thereby he envisions the striking FEAR responses of his animals to be purely physiological effects with no experiential consequences.

    …And when we analyze the punishing properties of electrical stimulation here in animals, we get the strongest aversive responses imaginable at the lowest levels of brain stimulation, and humans experience the most fearful states of mind imaginable.  Such issues of affective experience should haunt fear-conditioners much more than they apparently do.

    The evidence strongly indicates that there are primary-process emotional networks in the brain that help generate phenomenal affective experiences in all mammals, and perhaps in many other vertebrates and invertebrates.

    It’s stunning, really.  Anyone who has ever owned a dog is aware of how similar their emotional responses can often be to those of humans, and how well they remember them.  Like humans, they are mammals.  Like humans, their brains include a cortex.  It would hardly be “parsimonious” to simply assume that humans represent some kind of radical departure when it comes to the ability to experience and remember emotions, and that other animals lack this ability, in defiance of centuries of “common sense” observations to the contrary.  All this mass of evidence apparently isn’t “scientific,” and therefore doesn’t count, because these latter day Blank Slaters can’t observe in their mazes and shock boxes what appears obvious to everyone else in the world.  “Anthropomorphism!”  From such profound reasoning we are apparently to conclude that pain in animals doesn’t matter.

    Why the Blank Slate’s furious opposition to “anthropomorphism?”  In a sense, it’s actually an anachronism.  Recall that the fundamental dogma of the Blank Slate was the denial of human nature.  Obviously other mammals have a “nature.”  Clearly, the claim that dogs and cats must “learn” all their behavior from their “culture” was never going to fly.  Not so human beings.  Once upon a time the Blank Slaters claimed that everything in the human behavioral repertoire, with the possible exception of breathing, urinating, and defecating, was learned.  They even went so far as to include sex.  Even orgasms had to be “learned.”  It follows that the gulf between humans and animals had to be made as wide as possible.

    Fast forward to about the year 2000.  As far as their denial of human nature was concerned, the Blank Slaters had lost control of the popular media.  To an increasing extent, they were also losing control of the message in academia.  Books and articles about innate human behavior began pouring from the presses, and people began speaking of human nature as a given.  The Blank Slaters had lost that battle.  The main reason for their “anthropomorphism” phobia had disappeared.  In the more sequestered field of “animal nature,” however, they could carry on as if nothing had happened without making laughing stocks of themselves.  No one was paying any attention except a few animal rights activists.  And carry on they did, with the same “scientific” methods they had used in the past.  Allow me to quote from Panksepp & Biven again to give you a taste of what I’m talking about:

    It is noteworthy that Walter Hess, who first discovered the RAGE system in the cat brain in the mid-1930s (he won a Nobel Prize for his work in 1949), using localized stimulation of the hypothalamus, was among the first to suggest that the behavior was “sham rage.”  He confessed, however, in writings published after his retirement (as noted in Chapter 2:  e.g., The Biology of Mind [1964]), that he had always believed that the animals actually experienced true anger.  He admitted to having shared sentiments he did not himself believe.  Why?  He simply did not want to have his work marginalized by the then-dominant behaviorists who had no tolerance for talk about emotional experiences.  As a result, we still do not know much about how the RAGE system interacts with other cognitive and affective systems of the brain.

    In an earlier chapter on The Evolution of Affective Consciousness they added,

    In his retirement he admitted regrets about having been too timid, not true to his convictions, to claim that his animals had indeed felt real anger.  He confessed that he did this because he feared that such talk would lead to attacks by the powerful American behaviorists, who might thereby also marginalize his more concrete scientific discoveries.  To a modest extent, he tried to rectify his “mistake” in his last book, The Biology of Mind, but this work had little influence.

    So much for the “self-correcting” nature of science.  It is anything but that when poisoned by ideological dogmas.  Panksepp and Biven conclude,

    But now, thankfully, in our enlightened age, the ban has been lifted.  Or has it?  In fact, after the cognitive revolution of the early 1970s, the behaviorist bias has largely been retained but more implicitly by most, and it is still the prevailing view among many who study animal behavior.  It seems the educated public is not aware of that fact.  We hope the present book will change that and expose this residue of behaviorist fundamentalism for what it is:  an anachronism that only makes sense to people who have been schooled within a particular tradition, not something that makes any intrinsic sense in itself!  It is currently still blocking a rich discourse concerning the psychological, especially the affective, functions of animal brains and human minds.

    This passage is particularly interesting because it demonstrates, as can be seen from the passage about “the cognitive revolution of the early 1970s,” that the authors were perfectly well aware of the larger battle with the Blank Slate orthodoxy over human nature.  However, that rather opaque allusion is about as close as they came to referring to it in the book.  One can hardly blame them for deciding to fight one battle at a time.  There is one interesting connection that I will point out for the cognoscenti.  In Chapter 6, Beyond Instincts, they write,

    The genetically ingrained emotional systems of the brain reflect ancestral memories – adaptive affective functions of such universal importance for survival that they were built into the brain, rather than having to be learned afresh by each generation of individuals.  These genetically ingrained memories (instincts) serve as a solid platform for further developments in the emergence of both learning and higher-order reflective consciousness.

    Compare this with a passage from the work of the brilliant South African naturalist Eugene Marais, which appeared in his The Soul of the Ape, written well before his death in 1936, but only published in 1969:

    …it would be convenient to speak of instinct as phyletic memory.  There are many analogies between memory and instinct, and although these may not extend to fundamentals, they are still of such a nature that the term phyletic memory will always convey a clear understanding of the most characteristic attributes of instinct.

    As it happens, the very charming and insightful introduction to The Soul of the Ape when it was finally published in 1969 was written by none other than Robert Ardrey!  He had an uncanny ability to find and appreciate the significance of the work of brilliant but little-known researchers like Marais.

    As for Panksepp, I can only apologize for taking so long to discover him.  If nothing else, his work and teachings reveal that this is no time for complacency.  True, the Blank Slaters have been staggered, but they haven’t been defeated quite yet.  They’ve merely abandoned the battlefield and retreated to what would seem to be their last citadel: the field of animal behavior.  Unfortunately there is no Robert Ardrey around to pitch them headlong out of that last refuge, but they face a different challenge now.  They can no longer pretend to hold the moral high ground.  Their denial that animals can experience and remember their emotions in the same way as humans leaves the door wide open for the abuse of animals, both inside and outside the laboratory.  It is to be hoped that more animal rights activists like Temple Grandin will start paying attention.  I may not agree with them about eating red meat, but the maltreatment of animals, justified by reference to a bogus ideological dogma, is something that can definitely excite my own RAGE emotions.  I will have no problem standing shoulder to shoulder with them in this fight.

  • On the Red Meat Morality Inversion

    Posted on August 18th, 2015 Helian 4 comments

    Dwight Furrow recently posted an article at 3 Quarks Daily entitled “In Defense of Eating Meat.”  His first paragraph reads,

    There are many sound arguments for drastically cutting back on our consumption of meat—excessive meat consumption wastes resources, contributes to climate change, and has negative consequences for health. But there is no sound argument based on the rights of animals for avoiding meat entirely.

    As far as the first sentence is concerned, I have no problem with rationally discussing the pros and cons of meat consumption as long as the emotional whims behind the reasons are laid on the table.  I certainly agree with the second sentence, for the same reason cited by Westermarck more than a century ago: there is no such thing as objective morality, and it is therefore not subject to truth claims.  Furrow “kind of” sees it that way, but not quite.  Indeed, the core of his argument is very revealing.  It exposes all the ambivalence of the modern moral philosopher who understands the evolutionary origins of morality, but can’t bear to accept the consequences of that truth.  It reads as follows:

    Singer’s argument is based on the idea that animals have moral status because they suffer. As a utilitarian he may not be comfortable using “rights” talk but it surely fits here. He thinks animals have a right to equal consideration. But animals cannot have moral rights, simply because the treatment of animals falls outside the scope of our core understanding of morality. Morality is not a set of principles written in the stars. Morality arises, because as human beings, we need to cooperate with each other in order to thrive, and such cooperation requires trust.  The institution of morality is a set of considerations that helps to secure the requisite level of trust to enable that cooperation. That is why morality is a stable evolutionary development. It enhances the kind of flourishing characteristic of human beings. Rights, then, are entitlements that determine what a right-holder may demand of others that we decide to honor in order to maintain the requisite level of social trust.

    We are not similarly dependent on the trustworthiness of animals. (Pets are a special case which is why we don’t eat them). Our flourishing does not depend on getting cows, tigers, or shrimp to trust us or we them, and thus we have no reciprocal moral relations with them. From the standpoint of human flourishing there simply is no reason to confer moral rights on animals.

    Lovers of boneless ribeye steaks may well wish to simply accept this as it stands.  Any port in a storm, right?  Unfortunately, I’m a bit more fastidious than that.  Before plunging ahead, however, a bit of background on the debate might be useful.  Perhaps the best known crusader against the consumption of red meat is Peter Singer.  His Animal Liberation: A New Ethics for Our Treatment of Animals, published in 1975, has been, as Wiki puts it, “a formative influence on leaders of the modern animal liberation movement.”  His arguments are based on his conclusion that the particular flavor of utilitarianism he favored at the time constituted an objective guide for establishing the legitimacy of truth claims about the rights of animals.  As Furrow points out, the basic claim of the Utilitarians is that “only overall consequences matter in assessing the moral quality of an action.”  The most coherent statement of this philosophy was probably John Stuart Mill’s Utilitarianism, published in 1863.  That was probably too early for the moral consequences of Darwin’s Origin, published in 1859, to sink in.  I seriously doubt that Mill himself would have been a Utilitarian if he had lived a century later.  He was too smart for that.  Mill explicitly denied any belief in objective morality, noting that mankind had been struggling to find such an objective standard since the time of Socrates.  In his words,

    To inquire how far the bad effects of this deficiency have been mitigated in practice, or to what extent the moral beliefs of mankind have been vitiated or made uncertain by the absence of any distinct recognition of an ultimate standard, would imply a complete survey and criticism of past and present ethical doctrine. It would, however, be easy to show that whatever steadiness or consistency these moral beliefs have attained, has been mainly due to the tacit influence of a standard not recognized.

    I think Mill would have grasped where the “standard not recognized” really came from if there had been time for the consequences of Darwin’s great theory to really dawn on him.  Not so Singer, who apparently either never read or never appreciated Mill’s own reservations about his moral philosophy when he wrote his book, and treated utilitarianism as some kind of a moral gold standard.

    Which brings us back to Furrow’s counter-arguments.  Note in the above quote that he recognizes that morality is both subjective and an “evolutionary development.”  From that point, however, he wanders off into an intellectual swamp.  If morality is an evolutionary development, then it is quite out of the question that it arose, “…because as human beings, we need to cooperate with each other in order to thrive, and such cooperation requires trust.”  Evolution is not driven by needs, nor does it serve any purpose.  Robert Ardrey put it very succinctly in his bon mot, “Birds do not fly because they have wings.  They have wings because they fly.”  According to Furrow, “The institution of morality is a set of considerations that helps to secure the requisite level of trust to enable that cooperation.”  No, evolution didn’t somehow create an “institution of morality” consisting of “a set of considerations.”  Rather, it resulted in a set of behavioral responses in the form of emotions and feelings.  In other words, it produced the “moral sense” whose existence was demonstrated by Francis Hutcheson a century and a half before Darwin.  These emotions and feelings have their analogs in other animals.  We can only “consider” what they might mean after we have experienced them.  Had we not experienced them to begin with there would be nothing to consider, and therefore no morality.  Morality is a fundamentally emotional behavioral phenomenon, and not some cognitively distilled laundry list of legalistic prescriptions for developing trust so we can cooperate with each other.

    Furrow goes on to claim that animals cannot have rights because our “flourishing” does not depend on trusting them.  However, that can only be true if it is also true that the “purpose” of morality and therefore the “goal” of evolution was to promote “human flourishing,” which is nonsense.  “Rights” are subjective emotional constructs that we commonly delude ourselves into perceiving as real things.  It follows that any metric of their objective legitimacy when applied to animals is entirely equivalent to their objective legitimacy when applied to humans: zero.

    My own opinion on the eating of red meat is not based on any claim that I understand the “purpose” of moral emotions better than Singer.  Rather, it is based on the observation that morality exists because it has made our genetic survival more probable.  It therefore seems to me that interpreting our moral emotions in a way that makes our survival less likely is a characteristic of a dysfunctional biological unit.  In other words, it is what I call a morality inversion.  Establishing artificial moral taboos against the eating of red meat or any other food that might increase our chances of survival in the event that there’s not enough food to go around strikes me as just such a morality inversion.  It is based on the wildly improbable assumption that there will always be enough food to go around, in spite of the continuing increase of the human population, and in spite of the fact that such a state of affairs has often been more the exception than the rule throughout human history.  In other words, it amounts to turning morality against itself.

    There is nothing objectively wrong about morality inversions.  It’s just that an aversion to them happens to be one of my personal whims.  I like the idea of my own continued genetic survival and the continued survival of the human race because it seems to me to be in harmony with the reasons we happen to exist to begin with.  As a result, I have a negative emotional response to moral systems that accomplish the opposite.  In other words, according to my cognitive interpretation of my own subjective moral emotions, eating red meat is “good,” and morally induced vegetarianism is “evil.”  As I said, it’s just a whim, but I see no reason why my whims should take a back seat to anyone else’s, and that’s all Singer’s infinitesimally elaborated version of utilitarianism really amounts to.  Indeed, I’m encouraged by the hope that there are others who also place a certain value on survival, and therefore share my whims.  I note in passing that they by no means coincide with the notion of “human flourishing” that currently prevails in the academy.

     

  • Haidt and Lukianoff on “The Coddling of the American Mind”

    Posted on August 16th, 2015 Helian 1 comment

    Jonathan Haidt and Greg Lukianoff have just published an article in The Atlantic entitled The Coddling of the American Mind that illustrates yet another pathological artifact of the bitter determination of our species to preserve the fantasy of objective morality.  They describe the current attempts of university students to enforce ideological orthodoxy by vilifying “microaggressions” against the true faith and insisting on “trigger warnings” to insure the pure in heart will not be traumatized by allusions to Crimethink.

    Haidt has done some brilliant work on the nature of human morality, and his The Righteous Mind is a must read for anyone with a serious interest in understanding the subject.  Lukianoff is the CEO of the Foundation for Individual Rights in Education, which supports free-speech rights on campus.  According to his own account, his interest in the subject was catalyzed by his personal struggle with depression.

    According to the article,

    A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense.

    Examples of the phenomena they describe may be found here, here, and here.  The authors note that,

    The press has typically described these developments as a resurgence of political correctness.  That’s partly right, although there are important differences between what’s happening now and what happened in the 80s and 90s.  That movement sought to restrict speech… but it also challenged the literary, philosophical and historical canon, seeking to widen it by including more diverse perspectives.  The current movement is largely about emotional well-being.  More than the last, it presumes an extraordinary fragility of the collegiate psyche, and therefore elevates the goal of protecting students from psychological harm.

    Which brings them to the theme of their article – that this “vindictive protectiveness,” as they call it, is not protecting students from psychological harm, but is actually causing it.  The rest of the article mainly consists of the claim that modern students are really the victims of some of the dozen “common cognitive distortions” that are listed at the end of the article, and suggests that “cognitive behavioral therapy,” which helped Lukianoff overcome his own struggle with severe depression, might be something they should try as well.

    Maybe, but I suspect that the real motivation behind this latest “movement” has no more to do with “preventing psychological harm” than “preventing discomfort or giving offense” were the real motivations behind the PC movement, which, BTW, is at least as active now as it was in the 80s and 90s.  Rather, these phenomena are best understood as modern versions of the ancient game of moralistic one-upmanship.  In other words, they’re just a mundane form of status seeking behavior.  As usual, by taking them seriously one just plays into the hands of the status seekers.

    If you’d like to see what the phenomenon looked like in the 60’s, and didn’t happen to be around at the time, I recommend you have a look at the movie Getting Straight, starring Elliott Gould and Candice Bergen.  It’s all about the campus revolutions of the incredibly narcissistic and self-righteous Baby Boomers who were the “youth” of the time.  Encouraged by their doting parents, they imagined themselves the bearers of all worldly wisdom, guaranteed to be the creators of a future utopia.  Their doting (and despised) parents, of the generation that transcended the Great Depression, defeated the Nazis and fascists in World War II, and fended off Communism long enough for it to collapse of its own “internal contradictions” so their children had the opportunity to stage cathartic but entirely safe “revolutions,” are uniformly portrayed as idiots, standing in the way of “progress.”  The “student revolutionaries” in the film are every bit as pious as their modern analogs, and have equally idiotic demands.  The Brave New World will apparently only be possible if they are allowed to have coed dorms and gender and ethnic studies programs at every university, presumably to insure they will have no marketable skills, and so will be able to continue the “revolution” after they graduate.  The amusing thing about the film, especially in retrospect, is that its creators weren’t intentionally creating a comedy.  They actually took themselves seriously.

    For earlier versions of “vindictive protectiveness” most of us must turn to the history books.  The great Sage of Baltimore, H. L. Mencken, devoted a great deal of his time and energy to fighting its manifestation as the “Uplift” of his own day.  Many examples may be found in his six volumes of Prejudices, or his autobiographical Trilogy.  The latest version, BTW, has string bookmarks, just like the old family Bibles.  I’m sure the old infidel would have been amused.

    Shakespeare loathed the “vindictive protectiveness” of his day, which came in its first really modern version: Puritanism.  See, for example, his Twelfth Night, which scorns the “morally righteous” of his day as personified by that “devil of a Puritan,” Malvolio.  For cruder, less culturally evolved versions, one can go back to the Blues and Greens of the Byzantine circus, or the Christian squabbles over assorted flavors of heresy in the 3rd, 4th and 5th centuries.

    In a word, there is nothing new under the sun.  I’m sure Haidt realizes this.  After all, he devotes much of The Righteous Mind to describing and analyzing the phenomenon.  He plays along with the “cognitive behavioral therapy” stuff, because that’s what blows his co-author’s hair back, but he still says enough between the lines to make clear what is really going on.  For example, from the article,

    A claim that someone’s words are “offensive” is not just an expression of one’s own subjective feeling of offendedness.  It is, rather, a public charge that the speaker has done something objectively wrong.  It is a demand that the speaker apologize or be punished by some authority for committing an offense.

    And that really is, and always has been, the crux of the problem.  Nothing can be “objectively wrong.”  The origin of all these sublime and now microscopically distilled moral emotions will probably eventually be found in the ancient portions of our brains that we share with every other mammal, and probably the reptiles as well.  This “root cause” of moral behavior exists because it evolved.  It did not evolve because of its efficacy in fending off microaggressions, or to insure that Mesozoic mammals would be sure to issue trigger warnings.  No, it evolved for the somewhat unrelated reason that it happened to increase the odds that certain “selfish genes,” or perhaps “selfish groups,” if you believe E. O. Wilson, would survive, and pass on the relevant DNA to latter day animals with big brains, namely, us.  We get into trouble like this by over-thinking what our reptile brains are trying to tell us.  The problem will never go away until our self-knowledge develops to the point that we finally grasp this essential truth.  Until that great day dawns, we will have to grin and bear the annoyance of dealing with the pathologically and delicately self-righteous among us.  Roll out the fainting couches!


  • More Ardreyania, with Pinker and CRISPR

    Posted on August 11th, 2015 Helian No comments

    Robert Ardrey is the one man the “men of science” in the behavioral disciplines would most like to see drop down the memory hole for good.  Mere playwright that he was, he was presumptuous enough to be right about the existence of human nature when all of them were wrong, and influential enough to make them a laughing stock among educated laypeople for denying it.  They’ve gone to great lengths to make him disappear ever since, even to the extreme of creating an entire faux “history” of the Blank Slate affair.  I, however, having lived through the events in question, and still possessed of a vestigial respect for the truth, will continue to do my meager best to set the record straight.  Indeed, dear reader, I descended into the very depths to glean material for this post, so you won’t have to.  In fine, I unearthed an intriguing Ardrey interview in the February 1971 issue of Penthouse.

    The interview was conducted in New York by Harvey H. Segal, who had served on the editorial board of the New York Times from 1968 to 1969, and was an expert on corporate economics.  The introductory blurb noted the obvious to anyone who wasn’t asleep at the time; that the main theme of all Ardrey’s work was human nature.

    Equipped only with common sense, curiosity, and a practiced pen, Robert Ardrey shouldered his way into the study of human nature and has given a new direction to man’s thinking about man.

    and

    An impact on this scale is remarkable for any writer, but in Ardrey’s case it has the added quality of being achieved in a second career.

    As usual, in this interview as in every other contemporary article and review of his work that I’ve come across, there is no mention of his opinion on group selection.  It will be recalled that Ardrey’s favorable take on this entirely ancillary subject in his book The Social Contract was seized on by Steven Pinker as the specious reason he eventually selected to announce that Ardrey had been “totally and utterly wrong.”  There is much of interest in the interview but, as it happens, Ardrey’s final few remarks bear on the subject of my last post; artificial manipulation of human DNA.

    In case you haven’t read it, that post discussed some remarks on the ethical implications of human gene manipulation by none other than – Steven Pinker.  According to Pinker the moral imperative for the bioethicists who were agonizing over possible applications of such DNA-altering tools as CRISPR-Cas9 was quite blunt:  “Get out of the way.”  Their moral pecksniffery should not be allowed to derail the potential of these revolutionary tools for curing or alleviating a great number of genetically caused diseases and disorders or their promise of “vast increases in life, health, and flourishing.”  Pinker dismisses concerns about the possible misuse of the technology as follows:

    A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like “Brave New World’’ and “Gattaca,’’ and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

    That smacks a bit of what the Germans would call “Verharmlosung” – insisting that something is harmless when it really isn’t.  Tools like CRISPR certainly have the potential for altering DNA in ways not necessarily intended to merely cure disease.  For example, many intelligence-related genes have already been found, and new ones are being found on a regular basis.  Alterations in genes that influence human behavior are also possible.  Ardrey had a somewhat more sober take on the subject in the interview referred to above.  For example,

    Segal:  What about the possibility of altering the brain and human instincts through new advances in genetics, DNA and the like?

    Ardrey:  I don’t have much faith.  Altering of the human being is something to approach with the greatest apprehension because it depends on what kind of human being you want.  It is not so long since H. J. Muller, one of the greatest American geneticists and one of the first eugenicists, was saying that we have to eliminate aggression.  But now there is (Konrad) Lorenz who says that aggression is the basis of almost all life.  Reconstruction of the human being by human beings is too close to domestication, like control of the breeding of animals.  Muller’s plan for the human future was dealing with sheep.  I happen to be one who works best at being something other than a sheep, and I think most people do.

    and a bit later, on the prospect of curing disease:

    I see some important things that might be done with DNA on a very simple scale, such as repairing an error in, say, a hemophiliac – one of those genetic errors that appear at random every so often.  But that is making a thing normal.  It is not impossible that some genetically-caused disease, particularly if it has a one-gene basis, might be fixed.  But genes are like a club or political party with all sorts of jostling and jockeying between them.  You change one and a bell rings at the other end of the line.

    I tend to agree with Ardrey that there is a strong possibility that CRISPR and similar tools will be misused.  However, I also agree with Pinker that the bioethicists are only likely to succeed in stalling the truly beneficial applications, and the most “moral” course for them will be to step aside.  The dangers are there, but they are dangers the bioethicists are most unlikely to have the power to do anything about.

    At the individual level, parents interested in enhancing the intelligence, athletic prowess, or good looks of their offspring will seize the opportunity to do so, taking the moralists’ objections with a grain of salt in the process, and if the technology is there, the opportunity to create “designer babies” will be there as well for those rich enough to afford it.  Even more worrisome is the potential misuse of the technology by state actors.  As Ardrey pointed out, they may well take a much greater interest in the ancient bits of the brain that control our feelings, moods and behavior than in the more recently added cortical enhancements responsible for our relatively high intelligence.

    In a word, what we face is less a choice than a fait accompli.  Like nuclear weapons, the technology will eventually be applied in ways the bioethicists are likely to find very disturbing.  It’s not a question of if, but when.  The end result of this new era of artificially accelerated evolution will certainly be interesting for those lucky enough to be around to witness it.

    [Image: Robert Ardrey]

  • Steven Pinker on “The moral imperative for bioethics”

    Posted on August 9th, 2015 Helian No comments

    According to Steven Pinker in The moral imperative for bioethics, an opinion piece he recently wrote for the Boston Globe,

    …the primary moral goal for today’s bioethics can be summarized in a single sentence.  Get out of the way.

    I would strengthen that a bit to something like, “Stop the mental masturbation and climb back into the real world.”  At some level Pinker is aware of the fact that bioethicists and other “experts” in morality are not nearly as useful to the rest of us as they think they are.  He just doesn’t understand why.  As a result he makes the mistake of conceding the objective relevance of morality in solving problems germane to the field of biotechnology.  The fundamental problem is that these people are chasing after imaginary objects, things that aren’t real.  They have bamboozled the rest of us into taking them seriously because we have been hoodwinked by our emotional baggage just as effectively as they have.  There is no premium on reality as far as evolution is concerned.  There is a premium on survival.  We perceive “good” and “evil” as real objects, not because they actually are real objects, but because our ancestors were more likely to pass on the relevant genes if they perceived these fantasies as real things.  Bioethics is just one of the many artifacts of this delusion.

    Consider what the bioethicists are really claiming.  They are saying that mental impressions that exist because they happened to improve the evolutionary fitness of a species of advanced, highly social, bipedal apes correspond to real things, commonly referred to as “good” and “evil,” that have some kind of an objective existence independent of the minds of those creatures.  Not only that, but if one can but capture these objects, which happen to be extremely elusive and slippery, one can apply them to make decisions in the field of biotechnology, which didn’t exist when the mental equipment that gives rise to the impressions in question evolved.  Consider these extracts from the online conversation:

    Carl Elliott, at his blog, Fear and Loathing in Bioethics,

    Forget Tuskegee. Forget Willowbrook and Holmesburg Prison. Pay no attention to the research subjects who died at Kano, Auckland Women’s Hospital or the Fred Hutchinson Cancer Center. Never mind about Jesse Gelsinger, Ellen Roche, Nicole Wan, Tracy Johnson or Dan Markingson. According to Steven Pinker, “we already have ample safeguards for the safety and informed consent of patients and research subjects.”  So bioethicists should just shut up about abuses and let smart people like him get on with their work.

    Pinker:

    Indeed, biotechnology has moral implications that are nothing short of stupendous. But they are not the ones that worry the worriers.

    Julian Savulescu at the Practical Ethics website:

    What we need is less obstruction of good and ethical research, as Pinker correctly observes, and more vigilance at picking up unethical research. This requires competent, professional and trained bioethicists and improvement of ethics review processes.

    Daniel K. Sokol, also at Practical Ethics:

    The idea that research that has the potential to cause harm should be subject to ethical review should not be controversial in the 21st century. The words “this project has been reviewed and approved by the Research Ethics Committee” offers some reassurance that the welfare of participants has been duly considered. The thought of biomedical research without ethical review is a frightening one.

    Pinker:

    A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.”

    One imagines oneself in Bedlam.  These people are all trying to address what most people would agree is a real problem.  They understand that most people don’t want to be victims of anything like the Tuskegee experiments.  They also grasp the fact that most people would prefer to live longer, healthier lives.  True, these, too, are merely subjective goals, whims if you will, but they are whims that most of us will agree with.  The whims aren’t the problem.  The problem is that we are trying to apply a useless tool to reach the goals; human moral emotions.  We are trying to establish truths by consulting emotions to which no truth claims can possibly apply.  Stuart Rennie got it right in spite of himself in his attack on Pinker at his Global Bioethics Blog:

    My first reaction was: how is this new bioethics skill taught? Should there be classes that teach it in a stepwise manner, i.e. where you first learn not to butt in, then how to just step a bit aside, followed by somewhat getting out of the way, and culminating in totally screwing off? What would the syllabus look like? Wouldn’t avoiding bioethics class altogether be a sign of success?

    Pinker, too, iterates to an entirely rational final sentence in his opinion piece:

    Biomedical research will always be closer to Sisyphus than a runaway train — and the last thing we need is a lobby of so-called ethicists helping to push the rock down the hill.

    I, too, would prefer not to be a Tuskegee guinea pig.  I, too, would like to live longer and be healthier.  I simply believe that emotional predispositions that exist because they happen to have been successful in regulating the social interactions within and among small groups of hunter-gatherers millennia ago, are unlikely to be the best tools to achieve those ends.


  • Morality Inversions

    Posted on August 8th, 2015 Helian No comments

    The nature of morality and the reason for its existence have been obvious for more than a century and a half.  Francis Hutcheson demonstrated that it must arise from a “moral sense” early in the 18th century.  Hume agreed, and suggested the possibility that there may be a secular explanation for the existence of this moral sense.  Darwin demonstrated the nature of this secular explanation for anyone willing to peek over the blindfold of faith and look at the evidence.  Westermarck climbed up on the shoulders of these giants, gazed about, and summarized the obvious in his brilliant The Origin and Development of the Moral Ideas.  In short, good and evil have no objective existence.  They are subjective artifacts of behavioral predispositions that exist because they evolved.  Absent that evolved “moral sense,” morality as we know it would not exist.  It evolved because it happened to increase the probability that the genes responsible for its existence would survive and reproduce.  There exists no mechanism whereby those genes can jump out of the DNA of one individual, grab the DNA of another individual by the scruff of the neck, and dictate what kind of behavior that other DNA should regard as “good” or “evil.”

    In the years since Darwin and Westermarck our species has amply demonstrated its propensity to ignore such inconvenient truths.  Once upon a time religion provided some semblance of a justification for belief in an objective “good-in-itself.”  However, latter day “experts” on ethics and morality have jettisoned such anachronisms, effectively sawing off the branch they were sitting on.  Then, with incomparable hubris, they’ve claimed a magical ability to distill objective “goods” and “evils” straight out of the vacuum they were floating in.  In our own time the result is visible as a veritable explosion of abstruse algorithms, incomprehensible to all but a few academic scribblers, for doing just that.  Encouraged by these “experts,” legions of others have indulged themselves in the wonderfully sweet delusion that the particular haphazard grab bag of emotions they happened to inherit from their ancestors provided them with an infallible touchstone for sniffing out “real good” and “real evil.”  The result has been an orgy of secular piety that the religious Puritans of old would have shuddered to behold.

    The manifestations of this latter day piety have been bizarre, to say the least.  Instead of promoting genetic survival, they accomplish precisely the opposite.  Genes that are the end result of an unbroken chain of existence stretching back billions of years into the past now seem intent on committing suicide.  It’s not surprising really.  Other genes gave rise to an intelligence capable of altering the environment so fast that the rest couldn’t possibly keep up.  The result is visible in various forms of self-destructive behavior that can be described as “morality inversions.”

    A classic example is the belief that it is “immoral” to have children.  Reams of essays, articles, and even books have been written “proving” that, for various reasons, reproduction is “bad-in-itself.”  If one searches diligently for the “root cause” of all these counterintuitive artifacts of human nature, one will always find them resting on a soft bed of moral emotions.  What physical processes in the brain give rise to these moral emotions, and how, exactly, do they predispose us to act in some ways, but not others?  No one knows.  It’s a mystery that will probably remain unsolved until we unravel the secret of consciousness.  One thing we do know, however.  The emotions exist because they evolved, and they evolved because they enhanced the odds that the genes that gave rise to them would reproduce; or at least they did in a particular environment that no longer exists.  In the vastly different environment we have now created for ourselves, however, they are obviously capable of promoting an entirely different end, at least in some cases; self destruction.

    Of course, self destruction is not objectively evil because nothing is objectively evil.  Neither is it unreasonable, because, as Hume pointed out, reason by itself cannot motivate us to do anything.  We are motivated by “sentiments” or “passions” that we experience because it is our nature to experience them.  These include the moral passions.  Self destruction is a whim, and reason can be applied to satisfy the whim.  I happen to have a different whim.  I see myself as a link in a vast chain of millions of living organisms, my ancestors, if you will.  All have successfully reproduced, adding another link to the chain.  Suppose I were to fail to reproduce, thus becoming the final link in the chain and announcing, in effect, to those who came before me and made my life possible that, thanks to me, all their efforts had ended in a biological dead end.  In that case I would see myself as a dysfunctional biological unit or, in a word, sick, the victim of a morality inversion.  It follows that I have a different whim; to reproduce.  And so I have.  There can be nothing that renders my whims in any way objectively superior to those of anyone else.  I merely describe them and outline what motivates them.  I’m not disturbed by the fact that others have different whims, and choose self destruction.  After all, their choice to remove themselves from the gene pool and stop taking up space on the planet may well be to my advantage.

    Another interesting example of a morality inversion is the deep emotional high so many people in Europe and North America seem to get from inviting a deluge of genetically and culturally alien immigrants to ignore the laws of their countries and move in.  One can but speculate on the reasons that the moral emotions, mediated by culture as they always are, result in such counterintuitive behavior.  There is, of course, such a thing as human altruism, and it exists because it evolved.  However, that evolutionary process took place in an environment that made it likely that such behavior would enhance the chances that the responsible genes would survive.  People lived in relatively small ingroups surrounded by more or less hostile outgroups.  We still categorize others into ingroups and outgroups, but the process has become deranged.  Thanks to our vastly expanded knowledge of the world around us combined with vastly improved means of communication, the ingroup may now be perceived as “all mankind.”

    Except, of course, for the ever present outgroup.  The outgroup hasn’t gone anywhere.  It has merely adopted a different form.  Now, instead of the clan in the next territory over, the outgroup may consist of liberals, conservatives, Christians, Moslems, atheists, Jews, blacks, whites, or what have you.  The many possibilities are familiar to anyone who has read a little history.  Obviously, the moral equipment in our brains doesn’t have the least trouble identifying the population of Africa, the Middle East, or Mexico as members of the ingroup, and citizens of one’s own country who don’t quite see them in that light as the outgroup.  In that case, anyone who resists a deluge of illegal immigrants is “evil.”  If they point out that similar events in the past have led to long periods of ethnic and/or religious strife, occasionally culminating in civil war, or any of the other obvious drawbacks of uncontrolled immigration, they are simply shouted down with the epithets appropriate for describing the outgroup, “racist” being the most familiar and hackneyed example.  In short, a morality inversion has occurred.  Moral emotions have become dysfunctional, promoting behavior that will almost certainly be self-destructive in the long run.  I may be wrong of course.  The immigrants now pouring into Europe and North America without apparent limit may all eventually be assimilated into a big, happy, prosperous family.  I seriously doubt it.  Wait and see.

    One could cite many other examples.  The faithful, of course, have their own versions, such as removing themselves from the gene pool by acting as human bombs, often taking many others with them in the process.  The “good” in this case is the delusional prospect of enjoying the services of 70 of the best Stepford wives ever heard of in the afterlife.  Regardless, the point is that the evolved emotional baggage that manifests itself in so many forms as human morality has been left in the dust.  It cannot possibly keep up with the frenetic pace of human social and technological progress.  The result is morality inversions; behaviors that accomplish more or less the opposite of what they did in the environment in which they evolved.  Under the circumstances, the practice of allowing people to wallow in their moral emotions, insisting that they have a monopoly on the “good” and anyone who opposes them is “evil” is becoming increasingly problematic.  As noted above, I don’t have a problem with these people voluntarily removing themselves from the gene pool.  I do have a problem with becoming collateral damage.

  • Nuclear Update: Molten Salt, Rugby Balls, and the Advanced Hydrodynamic Facility

    Posted on August 7th, 2015 Helian No comments

    I hear at 7th or 8th hand that the folks at DOE have been seriously scratching their heads about the possibility of building a demonstration molten salt reactor.  They come in various flavors, but the “default” version is a breeder, capable of extracting far more energy from a given quantity of fuel material than current reactors by converting thorium into fissile uranium 233.  As they would have a liquid core, the possibility of a meltdown would be eliminated.  The copious production of neutrons in such reactors would make it possible to destroy the transuranic actinides, such as americium and curium, and, potentially, some of the longest-lived products of the fission reactions as well.  As a result, the residual radioactivity left by running such a reactor for, say, 30 years could decay to a level below that of the ore from which the fuel was originally extracted in under 500 years, a far cry from the millions of years commonly cited by anti-nuke alarmists.  Such reactors would be particularly attractive for the United States, because we have the largest proven reserves of thorium on the planet.  Disadvantages include the fact that uranium 233 is a potential bomb material, and therefore a proliferation concern, and the highly corrosive nature of the fluoride and/or chloride “salts” in the reactor core.  More detailed discussions of the advantages and disadvantages may be found here and here.
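
    For readers who like to see the arithmetic, here’s a back-of-the-envelope sketch (in Python) of what “far more energy” means.  The 200 MeV per fission and the 45 gigawatt-day-per-tonne once-through burnup are rough textbook numbers rather than figures for any particular reactor, and the comparison ignores enrichment, which makes the once-through case look even worse relative to the uranium that actually comes out of the mine:

        AVOGADRO = 6.022e23          # atoms per mole
        MEV_TO_J = 1.602e-13         # joules per MeV
        E_FISSION_MEV = 200.0        # approximate energy released per fission
        M_U233 = 233.0               # grams per mole of U-233 bred from thorium

        # Energy from completely fissioning one tonne of thorium (bred to U-233)
        atoms_per_tonne = 1.0e6 / M_U233 * AVOGADRO
        e_full_fission_j = atoms_per_tonne * E_FISSION_MEV * MEV_TO_J
        gwd_full = e_full_fission_j / 8.64e13     # one gigawatt-day = 8.64e13 joules

        # Assumed typical once-through LWR discharge burnup, per tonne of heavy metal
        gwd_lwr = 45.0

        print(f"Complete fission: ~{gwd_full:.0f} gigawatt-days per tonne")
        print(f"Once-through LWR: ~{gwd_lwr:.0f} gigawatt-days per tonne")
        print(f"Ratio: roughly {gwd_full / gwd_lwr:.0f} to 1")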

    The chances that the U.S. government will actually provide the funding necessary to build a molten salt or any other kind of advanced reactor are, unfortunately, slim and none.  We could do such things in the 50’s and 60’s with alacrity, but those days are long gone, and the country seems to have fallen victim to a form of technological palsy.  That’s too bad, because private industry won’t take up the slack.  To the extent they’re interested in nuclear at all, the profit motive rules.  At the moment, the most profitable way to generate nuclear energy is with reactors that burn uranium in a once-through cycle, wasting the lion’s share of its potential energy content and generating copious amounts of long-lived radioactive waste for which no rational long term storage solution has yet been devised.  In theory, DOE’s national laboratories should be stepping in to do the things that industry can’t or won’t do.  In reality, what they do is generate massive stacks of paper studies and reports on advanced systems that have no chance of being built.  Enough must have accumulated since the last research reactor was actually built at any of the national labs to stretch back and forth to the moon several times.  Oh, well, we can take comfort in the knowledge that at least some people at DOE are thinking about the possibilities.

    Moving right along, as most of my readers are aware, the National Ignition Facility, or NIF, did not live up to its name.  It failed to achieve inertial confinement fusion (ICF) ignition in the most recent round of experiments, missing that elusive goal by nearly two orders of magnitude.  The NIF is a giant, 192 beam laser system at Lawrence Livermore National Laboratory (LLNL) that focuses all of its 1.8 megajoules of laser energy on a tiny target containing deuterium and tritium, two heavy isotopes of hydrogen.  Instead of generating energy by splitting or “fissioning” heavy atoms, the goal is to get these light elements to “fuse,” releasing massive amounts of energy.
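
    To get a feel for the scale of things, here’s a minimal sketch based on the standard 17.6 MeV released per deuterium-tritium reaction.  It shows that only a few micrograms of fuel have to actually burn to pay back the 1.8 megajoules the laser delivers.  The hard part, of course, is getting any of it to burn at all:

        MEV_TO_J = 1.602e-13         # joules per MeV
        AMU_TO_KG = 1.661e-27        # kilograms per atomic mass unit

        e_per_reaction = 17.6 * MEV_TO_J                   # D + T -> He-4 + n
        m_per_reaction = (2.014 + 3.016) * AMU_TO_KG       # one deuteron plus one triton

        specific_energy = e_per_reaction / m_per_reaction  # joules per kilogram of D-T burned
        laser_energy = 1.8e6                               # NIF energy on target, joules

        # Mass of D-T that must actually burn just to pay back the laser energy
        breakeven_mass = laser_energy / specific_energy
        print(f"D-T specific energy: ~{specific_energy:.2e} J/kg")
        print(f"Fuel that must burn to return 1.8 MJ: ~{breakeven_mass * 1e9:.1f} micrograms")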

    Actually, the beams don’t hit the target itself.  Instead they’re focused through two holes in the ends of a tiny cylinder, known as a hohlraum, that holds a “capsule” of fuel material mounted in its center.  It’s what’s known as the indirect drive approach to ICF, as opposed to direct drive, in which the beams are focused directly on a target containing the fuel material.  When the beams hit the inside walls of the cylinder they generate a burst of x-rays.  These are what actually illuminate the target, causing it to implode to extremely high densities.  At just the right moment a “hot spot” is created in the very center of this dense, imploded fuel material, where fusion ignition begins.  The fusion reactions create alpha particles, helium nuclei containing two neutrons and two protons, which then smash into the surrounding “cold” fuel material, causing it to ignite as well, resulting in a “burn wave,” which spreads outward, igniting the rest of the fuel.  For this to happen, everything has to be just right.  The most important thing is that the implosion be almost perfectly symmetric, so that the capsule isn’t squished into a “pancake,” or squashed into a “sausage,” but is very nearly spherical at the point of highest density.
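
    Just how extreme those densities have to be can be judged from the textbook rule of thumb that the hot spot needs an areal density (density times radius) of roughly 0.3 grams per square centimeter, so that the alpha particles are stopped inside it rather than escaping.  The hot spot radii in the sketch below are merely illustrative values, but they show why compressing frozen fuel by factors of hundreds, and therefore a nearly perfect implosion, is required:

        RHO_R_NEEDED = 0.3      # g/cm^2, rough textbook hot spot ignition threshold
        RHO_DT_SOLID = 0.25     # g/cm^3, approximate density of frozen D-T

        for radius_microns in (1000.0, 100.0, 30.0):     # assumed illustrative hot spot radii
            radius_cm = radius_microns * 1.0e-4
            rho_needed = RHO_R_NEEDED / radius_cm        # density required to reach rho*R
            compression = rho_needed / RHO_DT_SOLID
            print(f"R = {radius_microns:6.0f} microns -> rho ~ {rho_needed:7.1f} g/cm^3 "
                  f"(~{compression:5.0f}x solid D-T)")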

    Obviously, everything wasn’t just right in the recently concluded ignition experiments.  There are many potential reasons for this.  Material blowing off the hohlraum walls could expand into the interior in unforeseen ways, intercepting some of the laser light and/or x-rays, resulting in asymmetric illumination of the capsule.  So-called laser/plasma interactions with abstruse names like stimulated Raman scattering, stimulated Brillouin scattering, and two plasmon decay, could be more significant than expected, absorbing laser light so as to prevent symmetric illumination and at the same time generating hot electrons that could potentially preheat the fuel, making it much more difficult to implode and ignite.  There are several other potential failure mechanisms, all of which are extremely difficult to model on even the most powerful computers, especially in all three dimensions.
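
    These instabilities are tied to particular electron densities in the plasma blowing off the hohlraum walls.  Light of a given wavelength can only propagate up to its “critical” density, and stimulated Raman scattering and two plasmon decay live at and below a quarter of that value.  The standard approximation, about 1.1×10^21 divided by the square of the wavelength in microns, gives a feel for the numbers; the sketch below just applies it to NIF’s current wavelength and, for comparison, the two longer wavelengths available from the same laser glass:

        def critical_density(wavelength_microns):
            """Electron density (per cm^3) at which light of this wavelength reflects."""
            return 1.1e21 / wavelength_microns ** 2

        for wavelength in (0.351, 0.527, 1.053):     # microns; 0.351 is the current NIF wavelength
            n_c = critical_density(wavelength)
            print(f"lambda = {wavelength:.3f} microns: n_c ~ {n_c:.1e} per cm^3, "
                  f"quarter-critical ~ {n_c / 4:.1e} per cm^3")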

    LLNL isn’t throwing in the towel, though.  In fact, there are several promising alternatives to indirect drive with cylindrical hohlraums.  One that recently showed promise in experiments on the much smaller OMEGA laser system at the University of Rochester’s Laboratory for Laser Energetics (LLE) is the substitution of “rugby ball” shaped hohlraums for the “traditional” cylinders.  According to the paper cited in the link, these “exhibit advantages over cylinders, in terms of temperature and of symmetry control of the capsule implosion.”  LLNL could also try hitting the targets with “green” laser light instead of the current “blue.”  The laser light starts out in the infrared, but is currently doubled and then tripled in frequency by passing it through slabs of a special crystal material, shortening its wavelength to the “blue” (actually ultraviolet) wavelength, which is absorbed more efficiently.  However, each time the wavelength is shortened, energy is lost.  If “green” light were used, as much as 4 megajoules of energy could be focused on the target instead of the current maximum of around 1.8.  If “green” is absorbed well enough and doesn’t set off excessive laser/plasma interactions, the additional energy just might be enough to do the trick.  Other possible approaches include direct drive, hitting the fuel-containing target directly with the laser beams, and “fast ignitor,” in which a separate laser beam is used to ignite a hot spot on the outside of the “cold,” imploded fuel material instead of relying on the complicated central hot spot approach.
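
    The wavelength bookkeeping, at least, is simple.  The Nd:glass fundamental is about 1053 nanometers; halving and thirding it gives the “green” and the current “blue.”  The on-target energies in the sketch below are just the figures quoted in this post, not official NIF numbers, used only to show the size of the penalty implied by the extra conversion step:

        FUNDAMENTAL_NM = 1053.0                  # Nd:glass fundamental (1-omega) wavelength

        wavelength_2w = FUNDAMENTAL_NM / 2       # "green"
        wavelength_3w = FUNDAMENTAL_NM / 3       # the current "blue" (really ultraviolet)

        energy_2w_mj = 4.0                       # figure quoted above for green on target
        energy_3w_mj = 1.8                       # current maximum on target at blue

        print(f"2-omega: {wavelength_2w:.1f} nm, up to ~{energy_2w_mj} MJ on target")
        print(f"3-omega: {wavelength_3w:.1f} nm, ~{energy_3w_mj} MJ on target")
        print(f"Implied penalty for the extra conversion step: ~{1 - energy_3w_mj / energy_2w_mj:.0%}")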

    Regardless of whether ignition is achieved on the NIF or not, it will remain an extremely valuable experimental facility.  The reason?  Even without ignition it can generate extreme material conditions relevant to those that exist in exploding nuclear weapons.  As a result, it gives us a significant leg up over other nuclear weapons states in an era of no nuclear testing, enabling us to field experiments on the effects of aging on the weapons in our stockpile and suggesting ways to ensure they remain safe and reliable.  Which brings us to the final topic of this post, the Advanced Hydrodynamic Facility, or AHF.

    The possibility of building such a beast was actively discussed and studied back in the 90’s, but Google it now and you’ll turn up very little.  It would behoove us to start thinking seriously about it again.  In modern nuclear weapons, conventional explosives are used to implode a “pit” of fissile material to supercritical conditions.  The implosion must be highly symmetric or the pit will “fizzle,” failing to produce enough energy to set off the thermonuclear “secondary” of the weapon that produces most of the yield.  The biggest uncertainty we face in maintaining the safety and reliability of our stockpile is the degree to which the possible deterioration of explosives, fusing systems, etc., will impair the implosion of the pit.  Basically, an AHF would be a system of massive particle accelerators capable of generating bursts of hard x-rays, or, alternatively, protons, powerful enough to image the implosion of the fission “pit” of a nuclear weapon at multiple points in time and in three dimensions.  Currently we have facilities such as the Dual-Axis Radiographic Hydrodynamic Test Facility (DARHT) at Los Alamos National Laboratory (LANL), but these only enable us to study the implosion of small samples of surrogate pit material.  An AHF would be able to image the implosion of actual pits, with physically similar surrogate materials replacing the fissile material.
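
    A rough sense of why the bursts have to be so powerful comes from simple exponential attenuation.  The mass attenuation coefficient and areal densities in the sketch below are assumed, purely illustrative values, not real pit parameters, but they show how fast the transmitted signal dies away as the imploding assembly gets denser:

        import math

        MU_OVER_RHO = 0.05      # cm^2/g, assumed value for very hard x-rays in a dense metal

        def transmission(areal_density_g_cm2):
            """Fraction of the x-ray beam surviving a given areal density."""
            return math.exp(-MU_OVER_RHO * areal_density_g_cm2)

        for areal_density in (50.0, 100.0, 200.0):      # g/cm^2, illustrative implosion stages
            print(f"rho*x = {areal_density:6.1f} g/cm^2 -> "
                  f"transmission ~ {transmission(areal_density):.1e}")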

    Obviously, such experiments could not be conducted in a conventional laboratory.  Ideally, the facility would be built at a place like the Nevada Test Site (NTS) north of Las Vegas.  Experiments would be fielded underground in the same way that actual nuclear tests were once conducted there.  That would have the advantage of keeping us prepared to conduct actual nuclear tests within a reasonably short time if we should ever be forced to do so, for example, by the resumption of testing by other nuclear powers.  With an AHF we could be virtually certain that the pits of the weapons in our arsenal will work for an indefinite time into the future.  If the pit works, we will also be virtually certain that the secondary will work as well, and the reliability of the weapons in our stockpile will be assured.

    Isn’t the AHF just a weaponeer’s wet dream?  Why is it really necessary?  Mainly because it would remove once and for all any credible argument for the resumption of nuclear testing.  Resumption of testing would certainly increase the nuclear danger to mankind, and, IMHO, is to be avoided at all costs.  Not everyone in the military and weapons communities agrees.  Some are champing at the bit for a resumption of testing.  They argue that our stockpile cannot be a reliable deterrent if we are not even sure if our weapons will still work.  With an AHF, we can be sure.  It’s high time for us to dust off those old studies and give some serious thought to building it.

  • “Ethics” in the 21st Century

    Posted on August 2nd, 2015 Helian 5 comments

    According to the banner on its cover, Ethics is currently “celebrating 125 years.”  It describes itself as “an international journal of social, political, and legal philosophy.”  Its contributors consist mainly of a gaggle of earnest academics, all chasing about with metaphysical butterfly nets seeking to capture that most elusive quarry, the “Good.”  None of them seems to have ever heard of a man named Westermarck, who demonstrated shortly after the journal first appeared that their prey was as imaginary as unicorns, or even Darwin, who was well aware of the fact, but was not indelicate enough to spell it out so blatantly.

    The latest issue includes an entry on the “Transmission Principle,” defined in its abstract as follows:

    If you ought to perform a certain act, and some other action is a necessary means for you to perform that act, then you ought to perform that other action as well.

    As usual, the author never explains how you get to the original “ought” to begin with.  In another article entitled “What If I Cannot Make a Difference (and Know It),” the author begins with a cultural artifact that will surely be of interest to future historians:

    We often collectively bring about bad outcomes.  For example, by continuing to buy cheap supermarket meat, many people together sustain factory farming, and the greenhouse gas emissions of millions of individuals together bring about anthropogenic climate change.

    and goes on to note that,

    Intuitively, these bad outcomes are not just a matter of bad luck, but the result of some sort of moral shortcoming.  Yet in many of these situations, none of the individual agents could have made any difference for the better.

    He then demonstrates that, because a equals b, and b equals c, we are still entirely justified in peering down our morally righteous noses at purchasers of cheap meat and emitters of greenhouse gases.  His conclusion in academic-speak:

    I have shown how Act Consequentialists can find fault with some agent in all cases where multiple agents who have modally robust knowledge of all the relevant facts gratuitously bring about collectively suboptimal outcomes, even if the agents individually cannot make any difference for the better due to the uncooperativeness of others.

    The author does not explain the process by which emotions that evolved in a world without cheap supermarket meat have lately acquired the power to prescribe whether buying it is righteous or not.

    It has been suggested by some that trading, the exchange of goods and services, is a defining feature of our species.  In an article entitled “Markets without Symbolic Limits,” the authors conclude that,

    In many cases, we are morally obligated to revise our semiotics in order to allow for greater commodification.  We ought to revise our interpretive schemas whenever the costs of holding that schema are significant, without counterweight benefits.  It is itself morally objectionable to maintain a meaning system that imbues a practice with negative meanings when that practice would save or improve lives, reduce or alleviate suffering, and so on.

    No doubt that very thought occurred to our hunter-gatherer ancestors, enhancing their overall fitness.  The happy result was the preservation of the emotional baggage that gave rise to it to later inform the pages of Ethics magazine.

    In short, “moral progress,” as reflected in the pages of Ethics, depends on studiously ignoring Darwin, averting our eyes from the profane scribblings of Westermarck, pretending that the recent flood of books and articles on the evolutionary origins of morality and the existence of analogs of human morality in many animals are irrelevant, and gratuitously assuming that there really is some “thing” out there for the butterfly nets to catch.  In other words, our “moral progress” has been a progress away from self-understanding.  It saddens me, because I’ve always considered self-understanding a “good.”  Just another one of my whims.

  • Scientific Morality and the Illusion of Progress

    Posted on July 11th, 2015 Helian 4 comments

    British philosophers demonstrated the existence of a “moral sense” early in the 18th century.  We have now crawled through the rubble left in the wake of the Blank Slate debacle and finally arrived once again at a point they had reached more than two centuries ago.  Of course, men like Shaftesbury and Hutcheson thought this “moral sense” had been planted in our consciousness by God.  When Hume arrived on the scene a bit later it became possible to discuss the subject in secular terms.  Along came Darwin to suggest that this “moral sense” might have developed in the same way as the physical characteristics of our species: via evolution by natural selection.  Finally, a bit less than half a century later, Westermarck put two and two together, pointing out that morality was a subjective emotional phenomenon and, as such, not subject to truth claims.  His great work, The Origin and Development of the Moral Ideas, appeared in 1906.  Then the darkness fell.

    Now, more than a century later, we can once again at least discuss evolved morality without fear of excommunication by the guardians of ideological purity.  However, the guardians are still there, defending a form of secular Puritanism that yields nothing in intolerant piety to the religious Puritans of old.  We must not push the envelope too far, lest we suffer the same fate as Tim Hunt, with his impious “jokes,” or Matt Taylor, with his impious shirt.  We cannot just blurt out, like Westermarck, that good and evil are merely subjective artifacts of human moral emotions, so powerful that they appear as objective things.  We must at least pretend that these “objects” still exist.  In a word, we are in a holding pattern.

    One can actually pin down fairly accurately the extent to which we have recovered since our emergence from the dark age.  We are, give or take, about 15 years pre-Westermarck.  As evidence of this I invite the reader’s attention to a fascinating “textbook” for teachers of secular morality that appeared in 1891.  Entitled Elements of Ethical Science: A Manual for Teaching Secular Morality, by John Ogden, it taught the subject with all the most up-to-date Darwinian bells and whistles.  In an introduction worthy of Sam Harris the author asks the rhetorical question,

    Can pure morality be taught without inculcating religious doctrines, as these are usually interpreted and understood?

    and answers with a firm “Yes!”  He then proceeds to identify the basis for any “pure morality:”

    Man has inherently a moral nature, an innate moral sense or capacity.  This is necessary to moral culture, since, without the nature or capacity, its cultivation were impossible… This moral nature or capacity is what we call Moral Sense.  It is the basis of conscience.  It exists in man inherently, and, when enlightened, cultivated, and improved, it becomes the active conscience itself.  Conscience, therefore, is moral sense plus intelligence.

    The author recognizes the essential role of this Moral Sense as the universal basis of all the many manifestations of human morality, and one without which they could not exist.  It is to the moral sentiments what the sense of touch is to the other senses:

    (The Moral Sense) furnishes the basis or the elements of the moral sentiments and conscience, much in the same manner in which the cognitive facilities furnish the data or elements for thought and reasoning.  It is not a sixth sense, but it is to the moral sentiments what touch is to the other senses, a base on which they are all built or founded; a soil into which they are planted, and from which they grow… All the moral sentiments are, therefore, but the concrete modifications of the moral sense, or the applications of it, in a developed form, to the ordinary duties of life, as a sense of justice, of right and wrong, of obligation, duty, gratitude, love, etc., just as seeing, hearing, tasting and smelling are but modified forms of feeling or touch, the basis of all sense.

    And here, in a manner entirely similar to so many modern proponents of innate morality, Ogden goes off the tracks.  Like them, he cannot let go of the illusion of objective morality.  Just as the other senses inform us of the existence of physical things, the moral sense must inform us of the existence of another kind of “thing,” a disembodied, ghostly something that floats about independently of the “sense” that “detects” it, in the form of a pure, absolute truth.  There are numerous paths whereby one may, more or less closely, approach this truth, but they all converge on the same, universal thing-in-itself:

    …it must be conceded that, while we have a body of incontestable truth, constituting the basis of all morality, still the opinions of men upon minor points are so diverse as to make a uniform belief in dogmatical principles impossible.  The author maintains that moral truths and moral conduct may be reached from different routes or sources; all converging, it is true, to the same point:  and that it savors somewhat of illiberality to insist upon a uniform belief in the means or doctrines whereby we are to arrive at a perfect knowledge of the truth, in a human sense.

    The means by which this “absolute truth” acquires the normative power to dictate “oughts” to all and sundry is described in terms just as fuzzy as those used by the moral pontificators of our own day, as if it were ungenerous to even ask the question:

    When man’s ideas of right and wrong are duly formulated, recognized and accepted, they constitute what we denominate MORAL LAW.  The moral law now becomes a standard by which to determine the quality of human actions, and a moral obligation demanding obedience to its mandates.  The truth of this proposition needs no further confirmation.

    As they say in the academy to supply missing steps in otherwise elegant proofs, it’s “intuitively obvious to the casual observer.”  In those more enlightened times, only fifteen years elapsed before Westermarck demolished Ogden’s ephemeral thing-in-itself, pointing out that it couldn’t be confirmed because it didn’t exist, and was therefore not subject to truth claims.  I doubt that we’ll be able to recover the same lost ground so quickly in our own day.  Secular piety reigns in the academy, in some cases to a degree that would make the Puritans of old look like abandoned debauchees, and is hardly absent elsewhere.  Savage punishment is meted out to those who deviate from moral purity, whether flippant Nobel Prize winners or overly principled owners of small town bakeries.  Absent objective morality, the advocates of such treatment would lose their odor of sanctity and become recognizable as mere absurd bullies.  Without a satisfying sense of moral rectitude, bullying wouldn’t be nearly as much fun.  It follows that the illusion will probably persist a great deal longer than a decade and a half this time around.

    Be that as it may, Westermarck still had it right.  The “moral sense” exists because it evolved.  Failing this basis, morality as we know it could not exist.  It follows that there is no such thing as moral truth, or any way in which the moral emotions of one individual can gain a legitimate power to dictate rules of behavior to some other individual.  Until we find our way back to that rather elementary level of self-understanding, it will be impossible for us to deal rationally with our own moral behavior.  We’ll simply have to leave it on automatic pilot, and indulge ourselves in the counter-intuitive hope that it will serve our species just as well now as it did in the vastly different environment in which it evolved.