Comments on: Self-Blackmail /self-blackmail/ More patient than death.

By: Ziz /self-blackmail/#comment-5401 Sun, 10 Oct 2021 00:37:59 +0000 /?p=9#comment-5401

Then I was cursed to be making hard decisions all the time.

Cursed with free will.
And it’s awesome.

By: alice monday /self-blackmail/#comment-81 Sun, 01 Jul 2018 01:01:30 +0000 /?p=9#comment-81 In reply to Daniel Reeves.

the near self *can* do that.

the agents are partially distinct and partially adversarial, so the same dynamics show up.

By: yeah sure, a monkey – fix these important bugs /self-blackmail/#comment-52 Sat, 04 Mar 2017 06:02:01 +0000 /?p=9#comment-52 […] It can be viewed as a game between the official personality – let’s call him Dr. Mockito – and the Mr. Hyde character. In the case of Beeminder the game is not a trade – it has more of the structure of blackmail. […]

By: my chat with the beeminder guy – fix these important bugs /self-blackmail/#comment-49 Sat, 11 Feb 2017 19:08:03 +0000 /?p=9#comment-49 […] DR: That reminds me of another forum discussion sparked by another blog post [Sinceriously]. […]

By: Romeo Stevens /self-blackmail/#comment-13 Tue, 20 Dec 2016 07:28:45 +0000 /?p=9#comment-13 >It’s better to limp in the right direction than run in the wrong one.

ahh, yes. This is good.

By: Daniel Reeves /self-blackmail/#comment-8 Sat, 17 Dec 2016 01:55:48 +0000 /?p=9#comment-8 In reply to Admin.

Good point about the false dichotomy. I’m inclined, though, to double down in favor of the far self. I think the theory on time inconsistency and hyperbolic discounting, as well as my experience (I should confess here my highly biased sample of people — those who like Beeminder), supports the view that it’s the far self who can make rational tradeoffs. The far self can appreciate the deliciousness of desserts or the agony (apparently, for some people) of grad school and trade it off against health and career prospects. The near self simply disregards non-immediate consequences.
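
To make the time-inconsistency point concrete, here’s a rough sketch of that preference reversal in Python. The discount formula, the hyperbolic_value function, and every number in it are made up purely for illustration; this isn’t anything Beeminder computes.

def hyperbolic_value(amount, delay_days, k=0.2):
    """Present value of `amount` received `delay_days` from now, discounted as amount / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

small_soon = (50, 0)     # small reward, available immediately (e.g. skipping the task)
large_later = (100, 10)  # larger reward, 10 days out (e.g. the payoff of having done it)

# Evaluate both options from 30 days away (the "far self") and from right now
# (the "near self"). The ranking flips once the small reward is immediate:
# that flip is the time inconsistency.
for viewed_from in (30, 0):
    v_small = hyperbolic_value(small_soon[0], small_soon[1] + viewed_from)
    v_large = hyperbolic_value(large_later[0], large_later[1] + viewed_from)
    winner = "larger-later" if v_large > v_small else "smaller-sooner"
    print(f"{viewed_from:2d} days out: small={v_small:.1f}, large={v_large:.1f} -> prefers {winner}")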

I see that it’s not necessarily that way, and there exist people who pathologically hoard their money or feel too guilty to ever have leisure time. I view that as rare hypercorrection. I hate arguments that include the words “most people” but that’s what my argument boils down to. The common case is akrasia, impetuousness, procrastination. Acting blatantly irrationally through failure to think outside of narrow timeslices.

Crap, I’ve fallen into the dichotomy again. But here’s my point: the far self has an inherent advantage. It can read about the case of the miserable grad student and heed the warning and adjust the tradeoffs and approximate the ideal holistic self who’s incorporating the knowledge at every timeslice. The near self can’t do that.

I think you’ve nicely articulated the theoretical ideal, getting all the timeslices in harmony. Have the near self always keep in focus the long-term goals so that every moment-to-moment decision incorporates the right tradeoffs. I think that’s what Nate Soares is also advocating in his blog series on Replacing Guilt — http://mindingourway.com/guilt/ — but I’ve failed to translate that into concrete steps. Beeminder is theoretically sub-optimal but extremely concrete.

I also like thinking of Beeminder as an insurance policy. Use subtler techniques to make Beeminder unnecessary but also have the Beeminder graph. Use it mainly to quantify and visualize your progress but also to enforce a bare minimum. It’s often easy to pick a minimum rate of progress on a goal such that IF you fell below it then the only possible explanation would be that your timeslice harmony techniques failed. In that hypothetical world, resorting to self-blackmail is better than falling on your face. By all means though, try to stay well above the minimum such that Beeminder’s commitment device is moot.

Finally, your game-theoretic use of the term “blackmail”: If there were literally 2 distinct agents in a noncooperative game then I think you’d be right. Don’t cave to the blackmail; pay the penalty, because then your adversary loses the incentive to continue to blackmail you. So my objection is that the agents aren’t distinct or adversarial. For the most part I just do what Beeminder tells me to when it tells me to do it and I feel great about that, even in the moment!

By: Admin /self-blackmail/#comment-7 Thu, 15 Dec 2016 04:46:29 +0000 /?p=9#comment-7 In reply to Daniel Reeves.

Hi Danny,

I read the expanded comment and thread.

I appreciate the depth you go to in engaging with contrary ideas like mine. I saw your remark about the danger of misinterpretation, and want to reassure you that what I’ve seen so far from you has moved me in a positive direction from my pessimistic priors about people rationalizing.

I know that some people are blackmailing themselves into doing things it is better for them to do. (I hesitate to say “they want to do the things”, because it’s actually more like parts of them sometimes want to do the things, and other times other parts of them don’t.) Many of them would have a bad time if they were thrown from that local optimum.

I want to highlight what I think is a false dichotomy, between people whose far selves are rational and whose immediate selves are not, and people whose immediate selves are rational and whose far selves are not.

I think all timeslices of a person, and all little sub-processes running and vying for control, often have bits of knowledge that others lack. They are all sort of compromised. In order to act as smart as possible, decisions need to be based on all that knowledge. It’s not optimal to just be really careful as one class of timeslices.

I’m glad to see users in that thread talking about gathering data and negotiation processes. My own process relies heavily on the acting timeslice trusting other timeslices. Other timeslices have a sort of spirit of pacifism and collaboration. This works well because future timeslices remember it.

Many pieces of knowledge are hard to verbalize. Even as the timeslice that has a sub-process that has that knowledge. Some motives are probably set up to be unconscious. (http://sideways-view.com/2016/11/26/if-you-cant-lie-to-others-you-must-lie-to-yourself/) Other pieces of the “spirit” I rely on to deal with this are a deep acceptance of my actual values, whatever they may be, as the things I want determining my decisions, and the same sort of respect for autonomy as I’d have for other people.

It’s an immensely powerful system. And one of the main things that feeds into it is internalizing the side-channel effects of choices, and of how you make them, that work through your thoughts being read (remembered) or your workings being predicted by someone who knows a lot about you (you). That’s the opposite direction from most people I’ve met who talk about using decision theory to avert akrasia.

I also want to clarify what I mean by practicing being blackmailed. By “blackmail” I’m referencing a particular class of games-as-in-game-theory. I’m not just picking the word for its low karma (http://slatestarcodex.com/2014/11/04/ethnic-tension-and-meaningless-arguments/). You can predict what your future self will do. You set up the commitment device because you predict your future self will capitulate. For this to work, you need to have the part of you that is in control “in the moment” fail at a central case of what decision theory says it should do.
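
To spell the game structure out, here’s a toy sketch in Python with invented payoffs; the names and numbers (PAY_DEMAND, PENALTY, CARRY_OUT_COST) are assumptions for illustration only.

# Toy blackmail game. Whether threatening pays depends entirely on the
# victim's policy, which a good predictor can see in advance.
PAY_DEMAND = 5      # what the victim hands over by caving
PENALTY = 10        # what the victim suffers if the threat is carried out
CARRY_OUT_COST = 1  # what it costs the blackmailer to follow through

def blackmailer_payoff(victim_caves):
    return PAY_DEMAND if victim_caves else -CARRY_OUT_COST

def victim_payoff(victim_caves):
    return -PAY_DEMAND if victim_caves else -PENALTY

# Against a policy of always caving, threatening is profitable, so it keeps
# happening. Against a policy of never caving, threatening loses money, so a
# predictor who knows that policy has no reason to issue the threat at all.
for policy, caves in (("always caves", True), ("never caves", False)):
    print(f"{policy}: blackmailer gets {blackmailer_payoff(caves)}, victim gets {victim_payoff(caves)}")

In the self-blackmail version, the one setting the penalty is your past self, and the commitment only does anything if the in-the-moment self is predicted to cave rather than eat the penalty.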

By: Tracy W /self-blackmail/#comment-6 Mon, 12 Dec 2016 07:24:18 +0000 /?p=9#comment-6 I think people use Beeminder differently for different purposes. I use Beeminder to do things like 2 minutes of exercises a day to keep my back from seizing up (advised by my physio). That seems pretty far from the superego.

By: Daniel Reeves /self-blackmail/#comment-4 Sat, 10 Dec 2016 03:33:31 +0000 /?p=9#comment-4 PS: I expanded my comment here: http://forum.beeminder.com/t/rebuttal-to-sinceriouslys-self-blackmail/2784

By: Daniel Reeves /self-blackmail/#comment-3 Sat, 10 Dec 2016 02:28:12 +0000 /?p=9#comment-3 I like your idea of a self-fulfillingly self-enforcing commitment file! And the cautionary tales about committing to the wrong things are important. Generally my contention is that the version of you making decisions in the face of immediate consequences is a fundamentally compromised version of you. There are people for whom this isn’t true. And those seem to be the people you’re talking about. Like the person who can’t see how awful grad school is from a distance; they only see it from in the trenches.

(Let me pause here with a reminder that we can both be right: http://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/ )

But I don’t buy the argument that commitment devices mean teaching yourself to be blackmailed. You can call it blackmailing yourself or simply arranging your future incentives. Or simply hard-committing to something you’re certain you want to follow through on.

Which is probably the crux of it: Be really dang sure that the thing you’re committing to is something you really want. We harp on this a lot in Beeminderland. Like the Want-Can-Will test (http://blog.beeminder.com/wantcanwill), with the first question “How certain are you that you *want* to do this?”.

I should also mention that Beeminder has what I think is a clever way to minimize the consequences of being wrong about what you really want. We lay it all out in our article on Flexible Self-Control — http://blog.beeminder.com/flexbind/ — but the idea is that you’re only ever committed for the upcoming week. You can change your commitment and the changes take effect a week from now.
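
As a sketch of that mechanism (not Beeminder’s actual code; the Commitment class, AKRASIA_HORIZON, and the dates are invented for illustration), a requested change just sits in a queue until a week has passed:

from datetime import date, timedelta

AKRASIA_HORIZON = timedelta(days=7)

class Commitment:
    def __init__(self, rate_per_week):
        self.current_rate = rate_per_week
        self.pending = []  # list of (effective_date, new_rate)

    def request_change(self, new_rate, today):
        # Changes are queued and only take effect a week out, so the
        # in-the-moment self can't weaken the commitment it faces today.
        self.pending.append((today + AKRASIA_HORIZON, new_rate))

    def rate_on(self, day):
        rate = self.current_rate
        for effective, new_rate in sorted(self.pending):
            if effective <= day:
                rate = new_rate
        return rate

c = Commitment(rate_per_week=5)
c.request_change(new_rate=2, today=date(2016, 12, 10))
print(c.rate_on(date(2016, 12, 12)))  # prints 5: still bound by the old commitment
print(c.rate_on(date(2016, 12, 17)))  # prints 2: the change kicks in after the week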

But fundamental to all this is my assumption that you can make rational decisions at a distance. I think it’s good to question this, and my suspicion is that there are distinct personalities for whom it’s true and for whom it’s not.

Danny of Beeminder
