{"id":27,"date":"2017-01-09T08:00:53","date_gmt":"2017-01-09T08:00:53","guid":{"rendered":"http:\/\/sinceriously.fyi\/?p=27"},"modified":"2017-01-10T08:00:01","modified_gmt":"2017-01-10T08:00:01","slug":"narrative-breadcrumbs-vs-grizzly-bear","status":"publish","type":"post","link":"https:\/\/sinceriously.fyi\/narrative-breadcrumbs-vs-grizzly-bear\/","title":{"rendered":"Narrative Breadcrumbs vs Grizzly Bear"},"content":{"rendered":"
In my experience, to self-modify successfully, it is very, very useful to have something like trustworthy sincere intent to optimize for your own values, whatever they are.
If that sounds like it's the whole problem, don't worry. I'm gonna try to show you how to build it in pieces, starting with a limited form, which is something like decision theory or consequentialist integrity. I'm going to describe it with a focus on actually making it part of your algorithm, not just understanding it.

First, I'll lay groundwork for the special case of fusion required, in the form of how not to do it and how to tell when you've done it. Okay, here we go.

Imagine you were being charged by an enraged grizzly bear, you had nowhere to hide or run, and you had a gun. What would you do? Hold that thought.

I once talked to someone convinced that one major-party presidential candidate was much more likely to start a nuclear war than the other, and that this was the dominant consideration in voting. Riffing off a headline I'd read without clicking through and hadn't confirmed, I posed a hypothetical.

What if the better candidate knew you'd cast the deciding vote, and believed that the best way to ensure you voted for them was to help the riskier candidate win the primary in the other major party, since you'd never vote for the riskier candidate? What if they'd made this determination after hiring the best people they could to spy on and study you? What if their help caused the riskier candidate to win the primary?

Suppose:

- Your vote really will decide the general election.
- The better candidate really did help the riskier one win the other party's primary, because they predicted that was the way to secure your vote.
- A riskier-candidate presidency means a 10% chance of nuclear war; a better-candidate presidency means a 2.5% chance.

So, when you are choosing whom to vote for in the general election:

- Voting for the better candidate drops the chance of nuclear war from 10% to 2.5%.
- The only thing voting for the riskier candidate buys is not capitulating to the blackmail that put them on the ballot.

Sitting there on election night, I answered my own hypothetical: I'd vote for the riskier candidate, because what the better candidate had done was game-theoretic blackmail. My conversational partner asked how I could put not getting blackmailed over averting nuclear war. They had a point, right? How could I vote the riskier candidate in, knowing they had already won the primary, and that whatever this decision-theory bullshit motivating me not to capitulate to blackmail was, it had already failed? How could I put my pride in my conception of rationality over winning, when the world hung in the balance?

Think back to what you'd do in the bear situation. Would you say, "How could I put acting in accordance with an understanding of modern technology over not getting mauled to death by a bear?", and use the gun as a club instead of firing it?

Within the above unrealistic assumptions about elections, though, this is kind of the same thing.

Acting on an understanding that guns propel bullets is not a goal in and of itself. That wouldn't be strong enough motive. You probably could not tie your self-respect and identity to "I do the gun-understanding move" so tightly that it outweighed actually not being mauled to death by an actual giant bear actually sprinting at you like a small car made of muscle and sharp bits. If you believed guns didn't really propel bullets, you'd put your virtue and faith in guns aside and do what you could to save yourself by using the allegedly magic stick as a club. Yet you actually believe guns propel bullets, so you could use a gun even in the face of a bear.

Acting with integrity is not a goal in and of itself. That wouldn't be strong enough motive. You probably could not tie your self-respect and identity to "I do the integritous thing and don't capitulate to extortion" so tightly that it outweighed actually not having our pale blue dot darkened by a nuclear holocaust. If you believed that integrity does not prevent the better candidate from having helped the riskier one win the primary in the first place, you'd put your virtue and faith in integrity aside so you could stop nuclear war by voting for the better candidate and dropping the chance of nuclear war from 10% to 2.5%. You must actually believe that integrity collapses timelines in order to use integrity even in the face of Armageddon.

Another way of saying this is that you need belief that a tool works, not just belief in belief.
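To make "integrity collapses timelines" slightly more concrete, here is a toy calculation. It is my own sketch of the hypothetical, not the post's argument: beyond the 10% and 2.5% figures above, it assumes a made-up predictor accuracy (how reliably the spies read your disposition before the trap was set) and a made-up small chance that the riskier nominee wins the general even against your deciding vote.

```python
# Toy model of the election hypothetical. All structure and numbers here
# beyond the 10% / 2.5% figures are illustrative assumptions, not claims
# from the post.

P_WAR_RISKY_PRESIDENT = 0.10    # from the hypothetical
P_WAR_BETTER_PRESIDENT = 0.025  # from the hypothetical
PREDICTOR_ACCURACY = 0.99       # assumed: how well the spies read your policy
P_RISKY_WINS_ANYWAY = 0.05      # assumed: riskier nominee wins despite your vote

def p_war(policy: str) -> float:
    """P(nuclear war) given the voter's fixed policy toward blackmail.

    The better candidate only helps the riskier one win the primary
    (i.e. only sets up the blackmail) if their spies predict the
    blackmail will work, meaning they predict you will capitulate."""
    p_blackmail = (
        PREDICTOR_ACCURACY if policy == "capitulate" else 1 - PREDICTOR_ACCURACY
    )
    if policy == "capitulate":
        # Blackmail branch: you vote for the better candidate; they usually win.
        p_war_if_blackmailed = (
            (1 - P_RISKY_WINS_ANYWAY) * P_WAR_BETTER_PRESIDENT
            + P_RISKY_WINS_ANYWAY * P_WAR_RISKY_PRESIDENT
        )
    else:
        # Blackmail branch: you refuse to be extorted and the riskier candidate wins.
        p_war_if_blackmailed = P_WAR_RISKY_PRESIDENT
    # No-blackmail branch: the riskier candidate is never nominated.
    p_war_if_not = P_WAR_BETTER_PRESIDENT
    return p_blackmail * p_war_if_blackmailed + (1 - p_blackmail) * p_war_if_not

for policy in ("capitulate", "refuse"):
    print(f"{policy:10s}  P(war) ≈ {p_war(policy):.4f}")
# With these assumed numbers, the refuser's 10% branch almost never gets
# instantiated, which is the sense in which the policy "collapses timelines".
```

With these particular assumptions the refuser comes out ahead, because the blackmail branch almost never happens to them; with a sloppy enough predictor the comparison flips. Either way, the work is being done by a belief about what happened before you ever got to vote, which is exactly the kind of thing you have to actually believe rather than merely believe in.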
I suspect it's a common pattern for people to accept as a job well done an installation of a tool like integrity in their minds when they've laid out a trail of yummy narrative breadcrumbs along the forest floor, along the path they're supposed to take. But when a bear is chasing you, you ignore the breadcrumbs and take what you believe to be the path to safety. The motive to take a path needs to flow from the motive to escape the bear. Only then can the motive to follow a path grow in proportion to what's at stake. Only then will the path be used in high stakes, where breadcrumbs are ignored. The way to make that flow happen is to actually believe that the path is best, so that no breadcrumbs are necessary.

I think this is possible for something like decision theory / integrity as well. But what makes me think this is possible, that you don't have to settle for narrative breadcrumbs? That the part of you that's really in control can understand the power of these tools?

How do you know a gun will work? You weren't born with that knowledge, but it's made its way into the stuff that's really in control somehow. By what process?

Well, you've seen lots of guns being fired in movies and stuff. You are familiar with the results. And while you were watching them, you knew that, unlike lightsabers, guns were real. You've also probably seen some results of guns being used in the news, in history...

But if that's what it takes, we're in trouble. Because if there are counterintuitive abstract principles that you never get to see compelling visceral demonstrations of, or maybe even any demonstrations until it's too late, then you won't be able to act on them in life-or-death circumstances. And I happen to think that there are a few of these.

I still think you can do better.

If you had no gun, and you were sitting in a car with the doors and roof torn off, and that bear was coming, and littering the floor of the car were small cardboard boxes with numbers inked on them, 1 through 100, and on the dashboard was a note that said, "the key is in the box whose number is the product of 13 and 5", would you have to win a battle of willpower to check box 65 first? (You might have to win a battle of doing arithmetic quickly, but that's different.)

If you find the Monty Hall problem counterintuitive, can you come up with a grizzly bear test for that? I bet most people who are confident in System 2 but not in System 1 that you win more often by switching would switch when faced with a charging bear. It might be a good exercise to come up with the vivid details for this test. Make sure to include certainty that an unchosen bad door is revealed whether or not the first chosen door is good.
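If you want to check the switching claim itself rather than take it on authority, a quick simulation works. This is a generic sketch, nothing from the post, and it bakes in the condition just mentioned: a bad unchosen door is always revealed, whether or not the first pick was right.

```python
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    first_pick = random.choice(doors)
    # The host always opens a door that is neither the pick nor the prize,
    # regardless of whether the first pick was right.
    opened = random.choice([d for d in doors if d != first_pick and d != prize])
    if switch:
        final_pick = next(d for d in doors if d != first_pick and d != opened)
    else:
        final_pick = first_pick
    return final_pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}:  win rate ≈ {wins / trials:.3f}")
# Expected output: roughly 0.333 when staying, 0.667 when switching.
```

The host's guaranteed reveal is the crux; that's why the exercise above tells you to include certainty that a bad unchosen door gets opened either way.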
I don't think it'd be a heroic battle of willpower for such people to switch in the Monty Hall bear problem. I think that in this case System 1 knows System 2 is trustworthy and serving the person's values in a way it can't see, instead of serving an artifact, and lets it do its job. I'm pretty sure that's a thing System 1 is able to do. Even if it doesn't feel intuitive, I don't think this way of buying into a form of reasoning breaks down under high pressure the way narrative breadcrumbs do. I'd guess its main weakness relative to full System 1 grokking is that System 1 can't help as much with pattern-matching to find places to apply the tool.

Okay. Here's the test that matters:

Imagine that the emperor, Evil Paul Ekman, loves watching his pet bear chase down fleeing humans and kill them. He has captured you for this purpose and taken you to a forest outside a tower he looks down from. You cannot outrun the bear, but you hold 25% probability that by dodging around trees you can tire the bear into giving up and then escape. You know that any time someone doesn't put up a good chase, Evil Emperor Ekman is upset because it messes with his bear's training regimen. In that case, he'd prefer not to feed them to the bear at all. Seizing on inspiration, you shout, "If you sic your bear on me, I will stand still and bare my throat. You aren't getting a good chase out of me, your highness." Emperor Ekman, known to be very good at reading microexpressions (99% accuracy), looks closely at you through his spyglass as you shout, then says, "No you won't, but FYI, if that'd been true I'd've let you go. OPEN THE CAGE." The bear takes off toward you at 30 miles per hour, jaw already red with human blood. This will hurt a lot. What do you do?

What I want you to take away from this post is:

- For a tool like integrity or decision theory to keep working when the stakes are highest, you need actual belief that the tool works, not narrative breadcrumbs, not belief in belief.
- You can test which one you have by imagining the tool pitted against something on the order of a charging grizzly bear and honestly asking what you'd do.