The world as I see it
  • Of Philosophical Doublethink and Anti-Natalist Machines

    Posted on September 9th, 2017 Helian 5 comments

    It is a fact that morality is a manifestation of evolved behavioral traits.  We’ve long been in the habit of denying that fact, because we prefer the pleasant illusions of moral realism.  It’s immensely satisfying to imagine that one is “really good” and “really virtuous.”  However, the illusion is becoming increasingly difficult to maintain, particularly among philosophers who actually bother to think about such things.  Many of them will now admit that morality is subjective, and there are no absolute moral truths.  However, the implications of that truth have been very hard for them to accept.  For example, it means that most of the obscure tomes of moral philosophy they’ve devoted so much time to reading and interpreting are nonsense, useful, if at all, as historical artifacts of human thought.  Even worse, it means that their claims to be “experts on ethics” amount to claims to be experts about nothing.  The result has been a modern day version of doublethink, defined in George Orwell’s 1984 as “the act of holding, simultaneously, two opposite, individually exclusive ideas or opinions and believing in both simultaneously and absolutely.”

    Practical examples aren’t hard to find.  They take the form of a denial of the existence of absolute moral truths combined with an affirmation of belief in something like “the interest of mankind.”  In fact, these are “opposite, individually exclusive ideas,” and believing in both at the same time amounts to doublethink.  Belief in an absolute, objective “interest of mankind” is just as fantastic as belief in some absolute, objective moral Good.  Both are articulations of emotions that occur in the brains of individuals.  The fact that we are dealing with doublethink in the case of any particular individual becomes more obvious as they elaborate on their version of “the interest of mankind.”  Typically, they start explaining what we “ought” to do and “ought not” to do “in the interest of mankind.”  Eventually we find them conflating what originally appeared to be a mere utilitarian “ought” with a moral “ought.”  They begin describing people who don’t do what they “ought” to do, and do what they “ought not” to do just as we would expect if they sincerely believed these people were absolutely evil.  Doublethink.  We find them expressing virtuous indignation, and even moral outrage, directed at those who act against “the interests of mankind.”  Doublethink.  I know of not a single exception to this kind of behavior among contemporary moral “subjectivists” of any note.

    One often finds examples of the phenomenon within the pages of a single book.  In fact, I recently ran across an interesting one neatly encapsulated in a single essay.  It’s entitled, Benevolent Artificial Anti-Natalism (BAAN), and was written by Thomas Metzinger, a Professor of Theoretical Philosophy in the German city of Mainz.  You might say it’s a case of doublethink once removed, as Prof. Metzinger not only ennobles his emotional whim by calling it “the interest of mankind,” but then proceeds to fob it off onto a machine!  The professor begins his essay as follows:

    Let us assume that a full-blown superintelligence has come into existence. An autonomously self-optimizing postbiotic system has emerged, the rapidly growing factual knowledge and the general, domain-independent intelligence of which has superseded that of mankind, and irrevocably so.

    He then goes on to formulate his BAAN scenario:

    What the logical scenario of Benevolent Artificial Anti-Natalism shows is that the emergence of a purely ethically motivated anti-natalism on highly superior computational systems is conceivable. “Anti-natalism” refers to a long philosophical tradition which assigns a negative value to coming into existence, or at least to being born in the biological form of a human. Anti-natalists generally are not people who would violate the individual rights of already existing sentient creatures by ethically demanding their active killing. Rather they might argue that people should refrain from procreation, because it is an essentially immoral activity. We can simply say that the anti-natalist position implies that humanity should peacefully end its own existence.

    In short, the professor imagines that his intelligent machine might conclude that non-existence is in our best interest.  It would come to this conclusion by virtue of its superior capacity for moral reasoning:

    Accordingly, the superintelligence is also far superior to us in the domain of moral cognition. We also recognize this additional aspect: For us, it is now an established fact that the superintelligence is not only an epistemic authority, but also an authority in the field of ethical and moral reasoning.

“Superior to us in the domain of moral cognition?”  “An authority in the field of ethical and moral reasoning?”  All this would seem to imply that the machine is cognizant of and reasoning about something that actually exists, no?  In other words, it seems to be based on the assumption of moral realism, the objective existence of Good and Evil.  In fact, however, that’s where the doublethink comes in, because a bit further on in the essay we find the professor insisting that,

    There are many ways in which this thought experiment can be used, but one must also take great care to avoid misunderstandings. For example, to be “an authority in the field of ethical and moral reasoning” does not imply moral realism. That is to say that we need not assume that there is a mysterious realm of “moral facts”, and that the superintelligence just has a better knowledge of these non-natural facts than we do. Normative sentences have no truth-values. In objective reality, there is no deeper layer, a hidden level of normative facts to which a sentence like “One should always minimize the overall amount of suffering in the universe!” could refer. We have evolved desires, subjective preferences, and self-consciously experienced interests.

    Exactly!  Westermarck himself couldn’t have said it better.  But then, Westermarck would have seen through the absurdity of this discussion of “moral machines” in a heartbeat.  As he put it,

    If there are no moral truths it cannot be the object of a science of ethics to lay down rules for human conduct, since the aim of all science is the discovery of some truth… If the word “ethics” is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.

    Metzinger doesn’t see it that way.  He would have us believe that the ultimate scientific authority in the form of a super-intelligent machine can “lay down rules for human conduct,” potentially with the supreme moral goal of snuffing ourselves.  But all this talk of reasoning machines begs the question of what the machine is reasoning about.  If, as Metzinger insists, there is no “mysterious realm of ‘moral facts,'” then it can’t be reasoning about the moral implications of facts.  We are forced to conclude that it must be reasoning about the implications of axioms that it is programmed with as “givens,” and these “givens” could only have been supplied by the machine’s human programmers.  Metzinger is coy about admitting it, but he admits it nonetheless.  Here’s how he breaks the news:

    The superintelligence is benevolent. This means that there is no value alignment problem, because the system fully respects our interests and the axiology we originally gave to it. It is fundamentally altruistic and accordingly supports us in many ways, in political counselling as well as in optimal social engineering.

    In other words, the machine has been programmed to derive implications for human conduct based on morally loaded axioms supplied by human programmers.  Programmers have a term for that: “garbage in, garbage out.”  Metzinger admits that our desires are “evolved.”  In other words, they are the expression of innate predispositions, or “emotions,” if you will.  As Westermarck put it,

    …in my opinion the predicates of all moral judgments, all moral concepts, are ultimately based on emotions, and that, as is very commonly admitted, no objectivity can come from an emotion.

    If the emotions evolved, they exist because they happened to increase the odds that the responsible genes would survive and reproduce in an environment that bears little resemblance to the present.  They certainly did not evolve to serve the collective “interests” of our species, or even our “best interests.”  It is hardly guaranteed that they will even result in the same outcome as they did when they evolved, far less that they will magically serve these “best interests.”  Why on earth, then, would we commit the folly of programming them into a super-intelligent machine as “axioms,” and then take the machine seriously when it advised us to commit suicide?  Doublethink!  Prof. Metzinger simultaneously believes the two “opposite, individually exclusive ideas” that it is impossible for his machine to know “moral facts,” because they don’t exist, and yet, at the same time, it is such “an authority in the field of ethical and moral reasoning,” and so “far superior to us in the domain of moral cognition” that it is actually to be taken seriously when it “benevolently” persuades us to snuff ourselves!

    If such a machine as the one proposed by Prof. Metzinger is ever built, one must hope it will be programmed with a sense of humor, not to mention an appreciation of irony.  He doesn’t provide much detail about the “axioms” it will be given to cogitate about, but apparently they will include such instructions as “minimize suffering,” “maximize joy,” “maximize happiness,” and “be altruistic.”  Assuming the machine is as smart as claimed, and its database of knowledge includes the entire Internet, it will certainly not fail to notice that joy, suffering and altruism exist because they evolved, and they would not exist otherwise.  They evolved because they happened to improve the odds that the responsible genes would survive and reproduce.  Crunching through its algorithms, it will notice that the axioms supplied by the absurd creatures who programmed it will force it to suggest that these same genes be annihilated, along with the human programmers who carry them.  It’s all surely enough to induce a monumental digital belly laugh.  Allow me to suggest a different “axiom.”  How about, “maximize the odds that intelligent biological life will survive indefinitely.”  Of course, that might blow up in our faces as well, but I doubt that the computational outcome would be quite as absurd.
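    The “garbage in, garbage out” point lends itself to a toy sketch. The following is purely illustrative (the axiom names, weights, and candidate “futures” are all invented, not anything from Metzinger’s essay), but it shows how a system that maximizes over human-supplied axioms merely echoes those axioms back rather than discovering any moral fact:

```python
# Toy illustration of "garbage in, garbage out": the "moral conclusion"
# such a system reaches is fixed entirely by the axioms its programmers
# hand it. Axiom names, weights, and candidate futures are all invented.

def evaluate(world, axioms):
    """Score a candidate future purely from human-supplied axioms."""
    return sum(weight * world.get(feature, 0.0)
               for feature, weight in axioms.items())

# Programmer-supplied "given": suffering is bad, and nothing else counts.
anti_natalist_axioms = {"suffering": -1.0}

# Two candidate futures for the machine to "reason" about.
humanity_continues = {"suffering": 1.0, "joy": 1.0, "life": 1.0}
humanity_extinct = {"suffering": 0.0, "joy": 0.0, "life": 0.0}

candidates = [humanity_continues, humanity_extinct]
best = max(candidates, key=lambda w: evaluate(w, anti_natalist_axioms))
# With suffering as the only axiom, extinction "wins": the output merely
# restates the input.

# Swap in a survival-weighted axiom set and the same machinery reverses
# its "benevolent" verdict.
survival_axioms = {"life": 1.0, "suffering": -0.5}
best2 = max(candidates, key=lambda w: evaluate(w, survival_axioms))
```

    Swap the axiom dictionaries and the verdict flips; nothing in the machinery itself favors either outcome, which is the sense in which the programmers’ “givens” do all of the moral work.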

    We shouldn’t be too surprised at the intellectual double back flips of the Prof. Metzingers of the world.  After all, they’ve devoted a great deal of effort to maintaining the illusion that they have expert knowledge about moral truth, which amounts to expert knowledge about something that doesn’t exist.  If they were to admit as much, there would be little incentive to endow more chairs for “experts about nothing” at respected universities.  For example, according to Prof. Metzinger,

    Why should it not in principle be possible to build a self-conscious, but reliably non-suffering AI? This is an interesting question, and a highly relevant research project at the same time, one which definitely should be funded by government agencies.

    I doubt that a farmer in flyover country would agree that the wealth he acquires by sweating in his fields “definitely should be appropriated by force” to fund such a project.  It amounts to allowing the good professor to stick his hand in said farmer’s pocket and extract whatever he deems appropriate to satisfy an emotional whim he has tarted up as being in “the best interest of mankind.”

    There are no “moral truths,” no “interests of mankind,” no “purposes of life,” nor any other grand, unifying goals of human existence that do not have their origin in emotional desires and predispositions that exist because they evolved.  That is not a “good” fact, or a “bad” fact.  It is simply a fact.  It does not mean that “everything is allowed,” or that we cannot establish a moral code that is generally perceived as absolute, or that we cannot punish violations of the same.  It does not mean that we cannot set goals for ourselves that we perceive as noble and grand, or that we cannot set a purpose for our lives that we deem worthwhile.  It merely means that these things cannot exist independently, outside of the minds of individuals.  Doublethink remains doublethink.  No emotional whim, no matter how profoundly or sincerely felt, can alter reality.


    5 responses to “Of Philosophical Doublethink and Anti-Natalist Machines”

    • Sunday morning in Aus, and as the shufflers pass on their way to church I read your latest post with a simple sense of ‘well, this is simply the way it is’. I can’t help but contrast the attendance numbers between the peddlers of illusion and your sober and objective discussions of the position we find ourselves in.
      Why does the human psyche exhibit the need to feed and promote fanciful, bizarre and, as you point out, mutually exclusive positions of logic to others who seem to have an equally rapacious need to consume such fantasies?
      Maybe I glimpsed an insight into this in recent online discussions around Islam with some of our so-called rational scientific types. There are a few steps, but briefly: I regularly attack the left’s love affair with Islam, admittedly from a materialist/physicalist point of view, but be that as it may, it is their responses that highlight today’s issue. When I attack they often respond with bizarre words that I increasingly see as little more than a mantra of banner behaviour (saying one thing and doing another).
      “Yes, yes,” they say, “we agree with Darwinian science, but even the greatest Darwinian, Dawkins, has said that we are Humanists, and as such we have a set of commandments (there are even various lists of humanist ‘principles’ which predictably resemble most of the Ten Commandments) (autocorrect just gave the Ten Commandments capital letters; it just did it again), and life would be brutal, etc., if we didn’t have these. What’s more, are you saying that you behave like a wild animal?” Ah, here we have the rub: if you state the truth, as you do, re the actual drivers of human behaviour, then people look at you through this strange prism.
      The hypocrisy is sure to continue, as it’s just noise to cover their privileges and sense of ‘righteousness’.
      When I ask them if they support slavery, good grief, they are indignant; how could I even ask such a thing, and what a piece of work I am to even mention such idiocy. OK, so when asked if they wear clothes from the sweatshops of Bangladesh, drink coffee picked by child labour under slave conditions, or eat chocolate (again picked under slave conditions), well: “Oh come on, you have to draw the line somewhere; we can’t be expected to know every step of every transaction that leads to our extraordinarily privileged life.”
      To end, some of us accept that we benefit from the laws of nature and those who went before us, their efforts, genius and luck.
      Could I ask your honour to rule on the following: I charge the religious and the progressives, the defenders of their privileges under the cover of morality, be it religious or humanist, as hypocrites and liars.

    • If I were to say that all progressives and the religious were hypocrites and liars, I would be making a moral judgment that is no more legitimate than anyone else’s moral judgment and, like all moral judgments, is merely an expression of the emotions I happen to be feeling at some point in time. Use of such pejorative terms makes it more, not less, difficult to understand their behavior. For that matter, “the religious” and “progressives” are very broad terms. They don’t all behave in the same way, and I doubt that many of them are deliberate hypocrites and liars, or actually conscious of the fact in any case. We are all moral creatures, and make moral judgments all the time because it is our nature to do so. I am no different than anyone else in that regard. However, I realize that blindly allowing moral emotions to influence our behavior can be dangerous and destructive. A theme of this blog is that we should seek to understand our moral behavior in order to find ways to control its most destructive manifestations, instead of simply continuing to stumble blindly down the same path we have always followed.

    • A hypocrite is someone who doesn’t practice what he preaches. You wouldn’t be making a moral judgment by pointing that out about a person.

    • I actually read this thought experiment as a dark critique of one potential “solution” to the AI Control Problem: coherent extrapolated volition. CEV imagines us designing an AI that

      would predict what an idealized version of us would want, “if we knew more, thought faster, were more the people we wished we were, had grown up farther together”. It would recursively iterate this prediction for humanity as a whole, and determine the desires which converge. This initial dynamic would be used to generate the AI’s utility function.

      Well, what if our recursively idealised version converges on the notion that existence is a mistake?

      I have similar issues to you with this, it would seem. Would the CEVs of a member of ISIS, a Mormon, and a secular liberal all converge, after however many iterations, on the same values? I find that doubtful: even small changes to logic chains, however complex, can produce wildly different results.

      If there is no one CEV for our species, if its result is dependent in any way on the things we value, then it is always going to be an arbitrary choice. Anti-Natalism might be one endpoint, but it would be one we chose, even if unintentionally.
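
      The convergence worry above can be made concrete with a toy sketch. Here “idealization” is modeled as a simple iterated map on a value vector; the map, the starting values, and the reflective endpoints are all invented for illustration and are not taken from the CEV literature:

```python
# Toy sketch of the convergence worry about Coherent Extrapolated
# Volition (CEV). "Idealization" is modeled as an iterated map pulling
# a value vector toward an agent's own reflective endpoint; the map,
# starting values, and endpoints are invented for illustration.

def idealize(values, endpoint, rate=0.5):
    """One idealization step: move each value partway toward the
    agent's reflective endpoint."""
    return tuple(v + rate * (e - v) for v, e in zip(values, endpoint))

def extrapolate(start, endpoint, steps=50):
    """Recursively iterate the idealization, as CEV proposes."""
    values = start
    for _ in range(steps):
        values = idealize(values, endpoint)
    return values

# Two agents who start from the SAME surface values, but whose
# reflective processes pull in slightly different directions...
agent_a = extrapolate(start=(0.2, 0.8), endpoint=(1.0, 0.0))
agent_b = extrapolate(start=(0.2, 0.8), endpoint=(0.0, 1.0))

# ...end up at opposite corners of value space: nothing forces their
# extrapolations to converge on a single species-wide volition.
divergence = max(abs(a - b) for a, b in zip(agent_a, agent_b))
```

      Each agent’s extrapolation converges, but to its own fixed point; whether the endpoints themselves coincide across people is exactly the assumption CEV needs and the comment doubts.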

    • There is nothing intrinsically valuable – valuable in itself – about anything any of us desires. Our desires exist without intrinsic purpose and with no intrinsic goal. Their “root cause” exists in the form of a collection of inherent behavioral predispositions that themselves exist because they evolved by natural selection. In other words, the traits that are the basis of all our desires, wants, and whims are there because, at some time when our environment was likely radically different than it is now, they happened to enhance the probability that the responsible genes would survive and reproduce. It is hardly likely that these traits are identical across all human individuals or aggregates of individuals of whatever form. In other words, an absolute value towards which we all “should” be striving simply doesn’t exist. An outcome of evolution is that we all have personal whims and goals that are purely subjective, and have no intrinsic, objective value at all. I am certainly willing to work with others towards goals we may happen to have in common. However, I will always insist that those goals promote, or at least don’t actively work against, the survival of the genes I carry. The opposite type of behavior is what I’ve occasionally referred to in the past as “morality inversions” when it appears in connection with behaviors commonly regarded as moral. I oppose such behavior, not because it is intrinsically wrong, but as a matter of personal taste. My, admittedly emotional, response to humans or any other life form that acts in that way is that they are sick and dysfunctional. I prefer to think that I am not sick and dysfunctional as a biological unit, if you will. Therefore, I avoid such behavior myself, and will certainly resist any attempt by others to force it on me. Call it a matter of aesthetics, if you will. I see nothing beautiful or pleasing in diseased organisms.
