The world as I see it
  • Of Philosophical Doublethink and Anti-Natalist Machines

    Posted on September 9th, 2017 by Helian

    It is a fact that morality is a manifestation of evolved behavioral traits.  We’ve long been in the habit of denying that fact, because we prefer the pleasant illusions of moral realism.  It’s immensely satisfying to imagine that one is “really good” and “really virtuous.”  However, the illusion is becoming increasingly difficult to maintain, particularly among philosophers who actually bother to think about such things.  Many of them will now admit that morality is subjective, and that there are no absolute moral truths.  However, the implications of that truth have been very hard for them to accept.  For example, it means that most of the obscure tomes of moral philosophy they’ve devoted so much time to reading and interpreting are nonsense, useful, if at all, as historical artifacts of human thought.  Even worse, it means that their claims to be “experts on ethics” amount to claims to be experts about nothing.  The result has been a modern day version of doublethink, defined in George Orwell’s 1984 as “the act of holding, simultaneously, two opposite, mutually exclusive ideas or opinions and believing in both simultaneously and absolutely.”

    Practical examples aren’t hard to find.  They take the form of a denial of the existence of absolute moral truths combined with an affirmation of belief in something like “the interest of mankind.”  In fact, these are “opposite, mutually exclusive ideas,” and believing in both at the same time amounts to doublethink.  Belief in an absolute, objective “interest of mankind” is just as fantastic as belief in some absolute, objective moral Good.  Both are articulations of emotions that occur in the brains of individuals.  The fact that we are dealing with doublethink in the case of any particular individual becomes more obvious as they elaborate on their version of “the interest of mankind.”  Typically, they start explaining what we “ought” to do and “ought not” to do “in the interest of mankind.”  Eventually we find them conflating what originally appeared to be a mere utilitarian “ought” with a moral “ought.”  They begin describing people who don’t do what they “ought” to do, and do what they “ought not” to do, just as we would expect if they sincerely believed these people were absolutely evil.  Doublethink.  We find them expressing virtuous indignation, and even moral outrage, directed at those who act against “the interests of mankind.”  Doublethink.  I know of not a single exception to this kind of behavior among contemporary moral “subjectivists” of any note.

    One often finds examples of the phenomenon within the pages of a single book.  In fact, I recently ran across an interesting one neatly encapsulated in a single essay.  It’s entitled Benevolent Artificial Anti-Natalism (BAAN), and was written by Thomas Metzinger, a Professor of Theoretical Philosophy in the German city of Mainz.  You might say it’s a case of doublethink once removed, as Prof. Metzinger not only ennobles his emotional whim by calling it “the interest of mankind,” but then proceeds to fob it off onto a machine!  The professor begins his essay as follows:

    Let us assume that a full-blown superintelligence has come into existence. An autonomously self-optimizing postbiotic system has emerged, the rapidly growing factual knowledge and the general, domain-independent intelligence of which has superseded that of mankind, and irrevocably so.

    He then goes on to formulate his BAAN scenario:

    What the logical scenario of Benevolent Artificial Anti-Natalism shows is that the emergence of a purely ethically motivated anti-natalism on highly superior computational systems is conceivable. “Anti-natalism” refers to a long philosophical tradition which assigns a negative value to coming into existence, or at least to being born in the biological form of a human. Anti-natalists generally are not people who would violate the individual rights of already existing sentient creatures by ethically demanding their active killing. Rather they might argue that people should refrain from procreation, because it is an essentially immoral activity. We can simply say that the anti-natalist position implies that humanity should peacefully end its own existence.

    In short, the professor imagines that his intelligent machine might conclude that non-existence is in our best interest.  It would come to this conclusion by virtue of its superior capacity for moral reasoning:

    Accordingly, the superintelligence is also far superior to us in the domain of moral cognition. We also recognize this additional aspect: For us, it is now an established fact that the superintelligence is not only an epistemic authority, but also an authority in the field of ethical and moral reasoning.

    “Superior to us in the domain of moral cognition?”  “An authority in the field of ethical and moral reasoning?”  All this would seem to imply that the machine is cognizant of and reasoning about something that actually exists, no?  In other words, it seems to be based on the assumption of moral realism, the objective existence of Good and Evil.  In fact, however, that’s where the doublethink comes in, because a bit further on in the essay we find the professor insisting that,

    There are many ways in which this thought experiment can be used, but one must also take great care to avoid misunderstandings. For example, to be “an authority in the field of ethical and moral reasoning” does not imply moral realism. That is to say that we need not assume that there is a mysterious realm of “moral facts”, and that the superintelligence just has a better knowledge of these non-natural facts than we do. Normative sentences have no truth-values. In objective reality, there is no deeper layer, a hidden level of normative facts to which a sentence like “One should always minimize the overall amount of suffering in the universe!” could refer. We have evolved desires, subjective preferences, and self-consciously experienced interests.

    Exactly!  Westermarck himself couldn’t have said it better.  But then, Westermarck would have seen through the absurdity of this discussion of “moral machines” in a heartbeat.  As he put it,

    If there are no moral truths it cannot be the object of a science of ethics to lay down rules for human conduct, since the aim of all science is the discovery of some truth… If the word “ethics” is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.

    Metzinger doesn’t see it that way.  He would have us believe that the ultimate scientific authority in the form of a super-intelligent machine can “lay down rules for human conduct,” potentially with the supreme moral goal of snuffing ourselves.  But all this talk of reasoning machines begs the question of what the machine is reasoning about.  If, as Metzinger insists, there is no “mysterious realm of ‘moral facts,'” then it can’t be reasoning about the moral implications of facts.  We are forced to conclude that it must be reasoning about the implications of axioms that it is programmed with as “givens,” and these “givens” could only have been supplied by the machine’s human programmers.  Metzinger is coy about admitting it, but he admits it nonetheless.  Here’s how he breaks the news:

    The superintelligence is benevolent. This means that there is no value alignment problem, because the system fully respects our interests and the axiology we originally gave to it. It is fundamentally altruistic and accordingly supports us in many ways, in political counselling as well as in optimal social engineering.

    In other words, the machine has been programmed to derive implications for human conduct based on morally loaded axioms supplied by human programmers.  Programmers have a term for that; “garbage in, garbage out.”  Metzinger admits that our desires are “evolved.”  In other words, they are the expression of innate predispositions, or “emotions,” if you will.  As Westermarck put it,

    …in my opinion the predicates of all moral judgments, all moral concepts, are ultimately based on emotions, and that, as is very commonly admitted, no objectivity can come from an emotion.

    If the emotions evolved, they exist because they happened to increase the odds that the responsible genes would survive and reproduce in an environment that bears little resemblance to the present.  They certainly did not evolve to serve the collective “interests” of our species, or even our “best interests.”  It is hardly guaranteed that they will even result in the same outcome as they did when they evolved, far less that they will magically serve these “best interests.”  Why on earth, then, would we commit the folly of programming them into a super-intelligent machine as “axioms,” and then take the machine seriously when it advised us to commit suicide?  Doublethink!  Prof. Metzinger simultaneously believes the two “opposite, mutually exclusive ideas” that it is impossible for his machine to know “moral facts,” because they don’t exist, and yet, at the same time, it is such “an authority in the field of ethical and moral reasoning,” and so “far superior to us in the domain of moral cognition” that it is actually to be taken seriously when it “benevolently” persuades us to snuff ourselves!

    If such a machine as the one proposed by Prof. Metzinger is ever built, one must hope it will be programmed with a sense of humor, not to mention an appreciation of irony.  He doesn’t provide much detail about the “axioms” it will be given to cogitate about, but apparently they will include such instructions as “minimize suffering,” “maximize joy,” “maximize happiness,” and “be altruistic.”  Assuming the machine is as smart as claimed, and its database of knowledge includes the entire Internet, it will certainly not fail to notice that joy, suffering and altruism exist because they evolved, and they would not exist otherwise.  They evolved because they happened to improve the odds that the responsible genes would survive and reproduce.  Crunching through its algorithms, it will notice that the axioms supplied by the absurd creatures who programmed it will force it to suggest that these same genes be annihilated, along with the human programmers who carry them.  It’s all surely enough to induce a monumental digital belly laugh.  Allow me to suggest a different “axiom.”  How about, “maximize the odds that intelligent biological life will survive indefinitely.”  Of course, that might blow up in our faces as well, but I doubt that the computational outcome would be quite as absurd.

    We shouldn’t be too surprised at the intellectual double back flips of the Prof. Metzingers of the world.  After all, they’ve devoted a great deal of effort to maintaining the illusion that they have expert knowledge about moral truth, which amounts to expert knowledge about something that doesn’t exist.  If they were to admit as much, there would be little incentive to endow more chairs for “experts about nothing” at respected universities.  For example, according to Prof. Metzinger,

    Why should it not in principle be possible to build a self-conscious, but reliably non-suffering AI? This is an interesting question, and a highly relevant research project at the same time, one which definitely should be funded by government agencies.

    I doubt that a farmer in flyover country would agree that the wealth he acquires by sweating in his fields “definitely should be appropriated by force” to fund such a project.  It amounts to allowing the good professor to stick his hand in the said farmer’s pocket and extract whatever he deems appropriate to satisfy an emotional whim he has tarted up as being in “the best interest of mankind.”

    There are no “moral truths,” no “interests of mankind,” no “purposes of life,” nor any other grand, unifying goals of human existence that do not have their origin in emotional desires and predispositions that exist because they evolved.  That is not a “good” fact, or a “bad” fact.  It is simply a fact.  It does not mean that “everything is allowed,” or that we cannot establish a moral code that is generally perceived as absolute, or that we cannot punish violations of the same.  It does not mean that we cannot set goals for ourselves that we perceive as noble and grand, or that we cannot set a purpose for our lives that we deem worthwhile.  It merely means that these things cannot exist independently, outside of the minds of individuals.  Doublethink remains doublethink.  No emotional whim, no matter how profoundly or sincerely felt, can alter reality.