Choices Made Long Ago

I don’t know how mutable core values are. My best guess is: hardly mutable at all, or at least hardly mutable in any predictable way.

Any choice you can be presented with is a choice between some amounts of some things you might value and some other amounts of other things you might value. Amounts as in expected utility.
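(To sketch that formally, in standard decision-theory notation rather than anything specific to this post: if option $A$ yields outcome-bundle $x_i$ with probability $p_i$, and option $B$ yields $y_j$ with probability $q_j$, then the choice between them is just the comparison

\[
\mathbb{E}[U(A)] = \sum_i p_i \, U(x_i) \quad \text{vs.} \quad \mathbb{E}[U(B)] = \sum_j q_j \, U(y_j),
\]

where each bundle is some amounts of the things you might value.)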

When you abstract choices this way, it becomes a good approximation to think of all of a person’s choices as being made once timelessly forever. And as out there waiting to be found.

I once broke veganism to eat a cheese sandwich during a series of job interviews, after whoever managed ordering food had fake-complied with my request for vegan food. I ate it because I didn’t want to spend social capital on the issue, and because I wanted to have energy. It was a very emotional experience. I inwardly recited one of my favorite Worm quotes about consequentialism. The choice was seemingly insignificant (the sandwich was prepared anyway and would have gone to waste), but the way I made the decision revealed information about me to myself, which part of me may not have wanted me to know.

Years later, I attempted an operation to carry and drop crab pots on a boat. I did this to get money to get a project back on track: diverting intellectual labor from service to the political situation in the Bay Area (via its inflated rents) to saving the world, by providing housing on boats.

This was more troubling still.

In deciding to do it, I was worried that my S1 (System 1) did not resist this more than it did. I was hoping it would demand a thorough, desperate-for-accuracy calculation to see if it was really right. I didn’t want it to be possible for me to be dropped into Hitler’s body, with Hitler’s memories, and not immediately divert that body from its course.

I made the best estimates I could, incorporating the probability that crabs were sentient, and the probability that the world was a simulation to be terminated before space colonization, so that there was no future to fight for. This failed to make me feel resolved, possibly in part because I hoped the thing would fail. So I imagined a conversation with a character called Chara, whom I was using as a placeholder for override by my true self, and got something like:

You made your choice long ago. You’re a consequentialist whether you like it or not. I can’t magically do Fermi calculations better and recompute, with a mindset fueled by proper desperation, every cached thought in the tree that builds up to this conclusion. There just isn’t time for that. You have also made your choice about how to act in such VOI (value of information) / time tradeoffs long ago.
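(For concreteness, the estimate described above had roughly this shape; every symbol here is an illustrative placeholder, since the post gives no actual figures:

\[
\mathbb{E}[\Delta U] \;\approx\; V_{\text{money}} \cdot P(\text{real future to fight for}) \;-\; N_{\text{crabs}} \cdot P(\text{crabs sentient}) \cdot w_{\text{crab}},
\]

with the operation coming out justified only if the total is positive.)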

So having set out originally to save lives, I attempted to end them by the thousands for not actually much money. I do not feel guilt over this.

Say someone thinks of themself as an Effective Altruist, and they rationalize reasons to pick the wrong cause area because they want to be able to tell normal people what they do and get their approval. Maybe if you work really, really hard and extend local Schelling reach until they can’t sell that rationalization anymore, and they realize it, you can get them to switch cause areas. But that’s just constraining which options they have, to present them with a different choice. They still choose some amount of social approval over some amount of impact. Maybe they chose not to let the full amount of impact into the calculation. Then they made that decision because they were a certain amount concerned with making the wrong decision on the object level as a result, and a certain amount concerned with other factors.

They will still pick the same option if presented with the same choice again, when choice is abstracted to the level of “what are the possible outcomes as they’re tracking them, in their limited ability to model?”

Trying to fight people who choose to rationalize, for control of their minds, is trying to wrangle unaligned optimizers. You will not be able to outsource steering computation to them, and steering computation is most of the stuff that actually matters.

Here’s a gem from SquirrelInHell’s Mind:

forgiveness

preserving a memory, but refraining from acting on it

Apologies are weird.

There’s a pattern where there’s a dual view of certain interactions between people. On the one hand, you can see them as “make it mutually beneficial and consensual and it’s good; don’t interfere”. On the other hand, one or more parties might be treated as something like a natural resource to be divided fairly. Discrimination by race and sex is much more tolerated in the case of romance than in the case of employment. Jobs are much more treated as a natural resource to be divided fairly. Romance is not a thing people want to pay that price of regulating.

It is unfair to make snap judgements and write people off without allowing them a chance. And that doesn’t matter. If you level up your modeling of people, that’s what you can do. If you want to save the world, that’s what you must do.

I will not have my epistemology regarding people socially regulated, and my favor treated as a natural resource to be divided according to the tribe’s rules.

Additional social power to constrain people’s behavior and thoughts is not going to help me get more trustworthy computation.

I see most people’s statements that they are trying to upgrade their values as advertisements that they are looking to enter into a social contract, where they are treated as if more aligned in return for being held to higher standards and implementing a false face that may cause them to do some of those things even when no one else is looking.

If someone has chosen to become a zombie, that says something about their preference-weightings for experiencing emotional pain compared to having the ability to change things. I am pessimistic about attempts to break people out of the path to zombiehood, especially those who already know about x-risk. If, knowing the stakes, they still choose comfort over a slim chance of saving the world, I don’t have another choice to offer them.

If someone damages a project they’re on aimed at saving the world, based on rationalizations aimed at selfish ends, no amount of apologizing, adopting sets of memes that refute those rationalizations, or making “efforts” to self-modify to prevent it can change the fact that they made their choice long ago.

Arguably, a lot of ideas shouldn’t be argued. Anyone who wants to know them, will. Anyone who needs an argument has chosen not to believe them. I think “don’t have kids if you care about other people” falls under this.

If your reaction to this is to believe it and suddenly be extra-determined to make all your choices perfectly because you’re irrevocably timelessly determining all actions you’ll ever take, well, timeless decision theory is just a way of being presented with a different choice, in this framework.

If you have done lamentable things for bad reasons (not earnestly misguided reasons), and are despairing of being able to change, then either embrace your true values, the ones that mean you’re choosing not to change them, or disbelieve.

It’s not like I provided any credible arguments that values don’t change, is it?

22 thoughts on “Choices Made Long Ago”

  1. I’m a long-time reader and sporadic commenter from lesswrong.

    I have read a few times now that you are quite fervently vegan, which strikes me as a bit odd. I don’t want to drag this into a lame morality debate, but seeing the comment above on voting makes me wonder, and maybe you’d care to share what argument underlies your vegan conviction.

    I wouldn’t put lame moral-superiority signaling past anyone, but you’re clever enough to see through that if it were so, so I’m genuinely interested in what your argument is.

    Most things out there don’t die of old age; they get eaten, often quite horribly. This doesn’t excuse us making sufficiently complex animal minds suffer if we can avoid it, but it doesn’t strike me as outrageously cruel compared to nature red in tooth and claw. As long as living conditions for the animals we eat are made adequate by sufficient laws, a decent life cut short by an instant death that you don’t see coming, due to your cognitive limits, sounds like a fairly good deal compared to being eaten alive out there once you can’t keep up with your flock. The problem seems to me solvable by better animal protection laws and minimum standards.

    This connects in an interesting way to your stance on voting. I myself don’t vote for the same reason you don’t vote, and I imagine the argument against voting applies equally to veganism. Unless you expect to convert other people to veganism or vegetarianism, what’s the point of changing your own behavior? You alone are not going to make the slightest dent in animal suffering; whatever you don’t eat will be produced anyway and then thrown away.

  2. Note: I currently consider the sort of consequentialism I was running during the events described in this post to contain significant mistakes. Praxis has serious consequences. Consequentialists need to generate, through how we live our lives, the information for how there could be an alternative to the evil system we fight. We need to contribute to that engineering project, and to coordinate by it.

  3. Made long ago, and made continually, every moment, eternally. Made by a part of their algorithm with a lifespan longer than their instantiations, at its root. Choice is whatever is logically controlled by that. A thing is trivially logically controlled by itself. So who to be is also a choice. Not as some definitional degree of freedom to make a political statement, but necessarily; you cannot consistently think of it any other way. You induce endless 5-and-10 problems otherwise.

    1. The past, your neurotype which “produced” the choice, is therefore also chosen. Just because entropy’s arrow of time makes retrocausation less visible to you does not mean that it is not real. Choose good in all circumstances, and physics and biology are forced to explain you. Forced to furnish you with some kind of strange neurotype that does that. Forced to furnish the world with a way that neurotype could have come about.

      It’s ironic that in things derived from math, people sometimes think of a tree as something that fans recursively upward from a singular thing into many, when physical trees fan out recursively both up and down; you just don’t see most of the downward fanout. I guess that’s why sometimes in math it’s called a “trunk”, not a “root” node, abstracting direction from the metaphor to imagine the roots are up there among the “leaves”.

      1. When you understand this and see that people are still choosing their pasts, continuously for as long as those are their pasts, always doing every action they ever have done or will do, the ideas of mercy, forgiveness, redemption, and indulgence all just collapse to “letting people do evil”.

        1. When you understand why SuperpowerWiki is actually metaphysically correct to gradate directly from the non-absolute “indomitable will” (by which they basically mean “big willpower” in a normie conventional sense, which doesn’t, e.g., imply absolute incorruptibility) to “absolute will”, which sounds like some kind of magic bullshit…

          Lucifer Morningstar (DC/Vertigo Comics) is able to manipulate creation (e.g. the multiverse) with almost no limitations.

          Absolute Will is unlimited in its scope. Anything that exists can be manipulated, including, but not limited to: matter, energy, causality, emotion, etc. The user can grab hold of creation and twist, bend, sculpt, and re-sculpt it into whatever form they desire or can imagine. Usually, the power to create something out of nothing is not at most users’ disposal, as a created universe, multiverse, or omniverse must first be in existence in order for the user to manipulate it.

          …And that that, along with “omnilock” (equivalent to the void in full potential), is a lower bound on what’s available to any soul who decides to exercise them, no matter how long, how much embedding-surface it takes…

          …Then you can add “compromise” right next to those entries from the parent comment on your conceptual shitlist.

              1. But perhaps some day I will make an AI, digest all the information in the world, and find out what spoon that was, and then walk over and bend it with my hands. Or maybe I’ll win at large-scope decision theory and hack out of the universal prior such that even if I can’t win an AI arms race I can still make someone else’s AI find that fucking spoon and bend it. I probably won’t though, I’m really just not feeling absolutely determined, from the parts of me that are everywhere, to do a fucking parlor trick at your command. I never said I could do anything within a given timeframe.

                1. Hmm. I know who that is, so the spoon is narrowed down to probably one of the roughly 10–30 at his parents’ house (if indeed he bothered to put a spoon in his hand when typing, which I kinda doubt). Bracing for him publishing a dead man’s switch saying, “if my body is found in a pile of bent spoons…”.

        2. redemption

          Like, “re-deem”: when you authoritatively label someone as okay again, because the last such labeling was falsified.

          “I am seeking redemption”… “So it will surprise those who trust you again when I defect again.”

          Selling a part of your capacity to assign meaning, you know, like a paid product endorsement for someone who does bad things.

  4. -“If knowing the stakes they still choose comfort over a slim chance of saving the world, I don’t have another choice to offer them.”
    This seems a little harsh. What if they have been numbed to small probabilities of large utilities via thought experiments such as Pascal’s Wager/Mugger and the St. Petersburg paradox? The last one doesn’t even have a bullet you can bite; it’s just “decide how much expected utility you want in comparison with how low a probability of a good event you will accept”.

    And what about when the “slim chance of saving the world” comes from the hope of being able to formalize such decisions, to the point where they can be made by a computer/AGI?

    1. David Simmons.

      After reading further in your blog, I decided I should preempt the following question: no, I am not a vegan (I am a vegetarian except when it is particularly inconvenient). I recognize this means you might not want to talk to me, but I think you would prefer that to me not being honest.

      You are unwise to court my attention, predator, after creeping on “Gwen” at AIFSP while seeking sympathy from them about how you had recently gotten kicked out of a venue for sexual harassment.

      Perhaps if you were really being honest you would just die.

      Recent email from same address:

      I just finished re-reading your blog up to “Mana”, where in the comments you say, “If people ask me what I do, I often just say it’s a fucking secret. I think if you can’t accept the social consequences of saying even this, you should go home and rethink your life instead of ‘trying to save the world’.” I usually have been saying “Nothing” (as in “I don’t do anything”), but I guess your version works too!

      I also just reread Scott Alexander’s comment on jessicata’s post where he repeats the conventional wisdom that strategies that have a high chance of getting you into a mental hospital are probably bad even disregarding the hospital. Actually I am currently kind of conflicted between that worldview and something more like your worldview. “Both sides make good arguments”… “do you have anything to show for the efforts you’ve taken?” But of course that is a move in the social game, so it won’t be able to see the successes that come from hiding from the social game.

      Still… do we have good reason to believe that there are, in fact, good things to be gotten by hiding from the social game?… or is this the wrong question to ask somehow??…

      David

      Because everything you say, about all these cool rationalist ideas, including to “Gwen” back then on the topic of Val’s talks, has been essentially, “wow, so how about a compromise, and we soften the edges of these ideas and your ‘no’, so I can still be a zombie who starts feeling people up after getting explicitly rejected for sex”.

      Anna Salamon’s world, which kept you safe in this, in a dark exchange for some math talent, is a ruin. And where I build anew there is no place for you.

      1. He confessed in a follow-up email.

        Regarding me being “unwise to court [your] attention”, I am so far not unsatisfied with the results of doing so. Doxxing me and revealing embarrassing information may make it trickier to navigate the world, but ultimately I have nothing to hide and wish to be judged on my merits. And if you think I deserve worse than that, I would also not be unsatisfied with that outcome, although it may put us in an adversarial relationship. For example I request that you don’t publicize this email…

        Regarding me being a “creep” towards Gwen, I am genuinely sorry about that interaction and think that I screwed up. I have a model of “what went wrong”, but my model of people who call themselves “creeped out” predicts that you don’t want to discuss it. Regardless, I hope that we can move past it.

        Regarding me being a “zombie”, my best guess is that my left half is a zombie but my right half is a phoenix.

        So now comes the old “I’m single good.”

        1. Man, I want a scorecard of every time a sex predator gives a confession while nervously talking about how I could publish it but they hope I won’t, and then I do anyway. Like Brent Dill said he’d be dead within a year. What a disappointment.
