{"id":143,"date":"2017-12-30T06:13:22","date_gmt":"2017-12-30T06:13:22","guid":{"rendered":"http:\/\/sinceriously.fyi\/?p=143"},"modified":"2021-10-02T21:46:53","modified_gmt":"2021-10-02T21:46:53","slug":"spectral-sight-and-good","status":"publish","type":"post","link":"https:\/\/sinceriously.fyi\/spectral-sight-and-good\/","title":{"rendered":"Spectral Sight and Good"},"content":{"rendered":"
Epistemic status update: This model is importantly flawed. I will not explain why at this time. Just, reduce the overall weight you put in it<\/del>. (Actually, here<\/a>.) See also correction<\/a>.<\/p>\n Good people are people who have a substantial amount of altruism in their cores<\/a>.<\/p>\n Spectral sight is a collection of abilities allowing the user to see invisible things like the structure of social interactions, institutions, ideologies, politics, and the inner layers of other people’s minds.<\/p>\n I’m describing good and spectral sight together because the epistemics locating each concept are tightly interwoven as I’ve constructed them.<\/p>\n A specific type of spectral sight is the one I’ve shown in neutral and evil<\/a>. I’m going to be describing more about that.<\/p>\n This is a skill made of being good at finding out what structure reveals about core. Structure is easy to figure out if you already know it’s Real<\/a>. But often that’s part of the question. Then you have to figure out what it’s a machine for doing, as in: what was the still-present thing that installed it\u00a0and could replace it or override it<\/a> optimizing for?<\/p>\n It’s not a weirdly parochial definition to call this someone’s true values. Because that’s what will build new structure if the old structure stops doing its job. Lots of people “would” sacrifice themselves to save 5 others. And go on woulding until they actually get the opportunity.<\/p>\n There’s a game lots of rationalists have developed different versions of, “Follow the justification”. I have a variant: “Follow the motivational energy.” There’s a limited amount that neutral people will sacrifice for the greater good, before their structures run out of juice and disappear. 
“Is this belief system \/ whatever still working out for me” is a very simple subagent to silently, unconsciously run as puppetmaster.<\/p>\n There’s an even smarter version of that, where fake altruistic structure must be charged with Schelling reach<\/a> in order to work.<\/p>\n Puppetmasters doling out motivational charge to fake structure can include all kinds of other things to make the tails come apart<\/a> between making good happen and appearing to be trying to make good happen in a way that has good results for the person. I suspect that’s a lot of what the “far away”ness thing that the drowning child experiment exposes is made of. Play with variations of that thought experiment, and pay attention to system 1 judgements, not principles, to feel the thing out. What about a portal to the child? What about a very fast train? What if it was one-time teleportation? Is there a consistent cross-portal community?<\/p>\n There is biologically fixed structure in the core, the optimizer for which is no longer around to replace it. Some of it is heuristics toward the use of justice<\/a>\u00a0for coordinating for reproducing. Even with what’s baked in, the tails come apart between doing the right thing, and using that perception to accomplish things more useful for reproducing.<\/p>\n My model says neutral people will try to be heroes sometimes. Particularly if that works out for them somehow. If they’re men following high-variance, high-reward mating strategies, they can be winning even while undergoing significant risk to their lives. That landscape of value can often generate things in the structure class, “virtue ethics”.<\/p>\n Good people seem to have an altruism perpetual motion machine inside them, though, which will persist in moving them through cost in the absence of what would be a reward selfishly.<\/p>\n This is about the least intuitive thing to accurately identify in someone by anything but their long-term history. 
Veganism is one of the most visible and strong correlates. The most important summaries of what people are like are the best things to lie about. Therefore they require the best adversarial epistemology to figure out. And they are the most commonly used in oversimplifying. This does not make them not worth thinking about.<\/p>\n If you use spectral sight on someone’s process of figuring out what’s a moral patient, you’re likely to get one of two kinds of responses. One is something like “does my S1 empathize with it”; the other is clique-making behavior<\/a>, typically infused with a PR \/ false-face worthy amount of justice, but not enough to be crazy.<\/p>\n Not knowing this meant I was taken by surprise the first time I tried to proselytize veganism to a contractarian. How could anyone actually feel like inability to be a part of a social contract really really mattered?<\/p>\n Of course, moral patiency is an abstract concept, far in Schelling reach away from actual actions. And therefore one of the most thoroughly stretched toward lip-service to whatever is considered most good and away from actual action.<\/p>\n “Moral progress” has been mostly a process of Schelling reach extending. That’s why it’s so predictable. (See Jeremy Bentham.)<\/p>\n Thinking about this requires having calibrated quantitative intuitions on the usefulness of different social actions, and of internal actions. There is instrumental value for the purpose of good in clique-building, and there is instrumental value for the purpose of clique-building in appearing good-not-just-clique-building. You have to look at the algorithm, and its role in the person’s entire life, not just the suggestively named tokens, or token behavior.<\/p>\n When someone’s core acts around structure (akrasia), and self-concepts are violated, that’s a good glimpse into who they really are. Good people occasionally do this in the direction of altruism. Especially shortsighted altruism. 
Especially good people who are trying to build a structure in the class, “consequentialisms”.<\/p>\n Although I have few datapoints, most of which are significantly suspect, good seems quite durable. Because it is in core,\u00a0good people who get\u00a0jailbroken<\/a> remain good. (Think Adrian Veidt for a fictional example. Such characters often get labeled as evil by the internet. Often good as well.) There are tropes reflecting good people’s ability to shrug off circumstances that by all rights should have turned them evil. I don’t know if that relationship to reality is causal.<\/p>\n By good,\u00a0I don’t mean everything people are often thinking when they call someone “good”. That’s because that’s as complicated and nonlocal a concept as justice<\/a>. I’m going for an “understand over incentivize and prescribe behavior” definition here, and therefore insisting that it be a locally-defined concept.<\/p>\n It’s important not to succumb to the halo effect. This is a psychological characteristic. Just because you’re a good person doesn’t mean you’ll have good consequences. It doesn’t mean you’ll tend to have good consequences. It doesn’t mean you’re not actively a menace. It doesn’t mean you don’t value yourself more than one other person.\u00a0It’s not a status which is given as a reward or taken away for bad behavior, although it predicts against behavior that is truly bad in some sense. Good people can be dangerously defectbot-like. They can be ruthless, they can exploit people, they can develop structure for those things.<\/p>\n If you can’t thoroughly disentangle this from the narrative definition of a good person, putting weight in this definition will not be helpful.<\/p>\n","protected":false},"excerpt":{"rendered":" Epistemic status update: This model is importantly flawed. I will not explain why at this time. Just, reduce the overall weight you put in it. (Actually, here.) See also correction. 
Good people are people who have a substantial amount of altruism in their cores. Spectral sight is a collection of abilities allowing the user to … Continue reading “Spectral Sight and Good”<\/span><\/a><\/p>\n","protected":false,"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/posts\/143"}],"collection":[{"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/comments?post=143"}],"version-history":[{"count":8,"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/posts\/143\/revisions"}],"predecessor-version":[{"id":1215,"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/posts\/143\/revisions\/1215"}],"wp:attachment":[{"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/media?parent=143"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/categories?post=143"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sinceriously.fyi\/wp-json\/wp\/v2\/tags?post=143"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}