{"id":138,"date":"2017-12-29T22:17:51","date_gmt":"2017-12-29T22:17:51","guid":{"rendered":"http:\/\/sinceriously.fyi\/?p=138"},"modified":"2019-04-13T02:37:12","modified_gmt":"2019-04-13T02:37:12","slug":"neutral-and-evil","status":"publish","type":"post","link":"https:\/\/sinceriously.fyi\/neutral-and-evil\/","title":{"rendered":"Neutral and Evil"},"content":{"rendered":"

What is the good/neutral/evil axis of Dungeons and Dragons alignment made of?
We’ve got an idea of what it would mean for an AI to be good-aligned: it wants to make all the good things happen, so much so that it does.
But what’s the difference between a neutral AI and an evil AI?
It’s tempting to say that the evil AI is malevolent rather than just indifferent, and that the neutral one is the indifferent one.
But that doesn’t fit the intuitive idea that the alignment system was supposed to map onto, or what alignment is.

Imagine a crime boss who makes a living off of kidnapping random innocents for ransom, and who posts videos online of the torture and dismemberment of those whose loved ones don’t pay up, as encouragement, not out of sadism, but because they want money to spend on lots of shiny gold things they like and are indifferent to human suffering. Evil, right?

If sufficient indifference can make someone evil, then… If a good AI creates utopia, and an AI that kills everyone and creates paperclips because it values only paperclips is evil, then what is a neutral-aligned AI? What determines the exact middle ground between utopia and everyone being dead?

Would this hypothetical AI leave everyone alive on Earth and leave us our sun, but take the rest of the light cone for itself? If it did, then why would it? What set of values would make that the best course of action?

I think you’ve got an intuitive idea of what a typical neutral human does. They live in their house with their white picket fence, have kids, and grow old. They don’t go out of their way to right faraway wrongs in the world. But if they own a restaurant, and the competition down the road starts attracting away their customers, and they are given a tour through the kitchens in the back and see a great opportunity to disable the smoke detectors and start a fire that won’t be detected until it’s too late, burning down the building and probably killing the owner, they don’t do it.

It’s not that a neutral person values the life of their rival more than the additional money they’d make with the competition eliminated, or cares about better serving the populace with a better selection of food in the area. You won’t see them looking for opportunities to spend that much money or less to save anyone’s life.

And unless most humans are evil (which is as contrary to the intuitive concept the alignment system points at as “neutral = indifference” is), it’s not about action/inaction either. People eat meat. And I’m pretty sure most of them believe that animals have feelings. That’s active harm, probably.

Wait a minute, did I seriously just base a sweeping conclusion about what alignment means on an obscure piece of possible moral progress beyond the present day? What happened to all my talk about sticking to the intuitive concept?

Well, I’m not sticking to the intuitive concept. I’m sticking to the real thing the intuitive concept pointed at, the thing that gave it its worthiness of attention. I’m trying to improve on the intuitive thing.

I think that the behavior of neutral is wrapped up in human akrasia and the extent to which people are “capable” of taking ideas seriously. It’s way more complicated than good.

But there’s another ontology, the ontology of “revealed preferences”, where akrasia is about serving an unacknowledged end or operating under unacknowledged beliefs, where it is rational behavior from more computationally bounded subagents, and those are the true values. What does that have to say about this?

Everything systematic coming out of an agent is because of optimizing, just often optimizing dumbly and disjointedly if the agent is kind of broken. So what is the structure of that akrasia? Why do neutral people have all that systematic structure toward not doing “things like” burning down a rival restaurant owner’s business and life, but all that other systematic structure toward not spending their lives saving more lives than that? I put “things like” in quotes because that phrase contains the question. What is the structure of “like burning down a rival restaurant” here?

My answer: socialization, the light side, orders charged with motivational force by the idea of the “dark path” that ultimately results in justice getting them, as drilled into us by all fiction, false faces necessitated by not being coordinated against on account of the “evil” Schelling point. Fake structure in place for coordinating. If you try poking at the structure most people build in their minds around “morality”, you’ll see it’s thoroughly fake, and bent towards coordination which appears to be ultimately for their own benefit. This is why I said that the dark side will turn most people evil. The ability to re-evaluate that structure, now that you’ve become smarter than most around you, will lead to a series of “jailbreaks”. That’s a way of looking at the path of Gervais-sociopathy.

That’s my answer to the question of whether becoming a sociopath makes you evil. Yes for most people, by a definition of evil that is about individual psychology. No from the perspective that you’re evil if you’re complicit in an evil social structure, because then you probably already were, which is a useful perspective for coordinating to enact justice.

If you’re reading this and this is you, I recommend aiming for lawful evil. Keep a strong focus on still being able to coordinate even though you know that’s what you’re doing.

An evil person is typically just a neutral person who has become better at optimizing, more like an unfriendly AI, in that they no longer have to believe their own propaganda. That can be either because they’re consciously lying, really good at speaking on multiple levels with plausible deniability, and don’t need to fool anyone anymore, or because their puppetmasters have grown smart enough to reap benefits from defection without getting coordinated against, without the conscious mind’s help. That is why it makes no sense to imagine a neutral superintelligent AI.
