Good Group and Pasek’s Doom

This post is a work in progress.

Correction: I thought this infohazard was an exception to the principle that infohazards work on evil not good. It is not. You can read the old warning below if you want.

This post describes the infohazard I am naming “Pasek’s Doom”, after my dead comrade, publicly known as Maia Pasek at the time of their death. Discussion of Roko’s Basilisk will also be unmarked.

Because every bit of information about an infohazard contributes to the ability to guess what it is, including by compulsive thoughts, I will layer my warnings, each in more detail than the last.

First layer: This is an infohazard of an entirely separate class from Roko’s Basilisk. The primary dangers are depression and suicide, and irreversible change to your “utility function”. If you have a history of suicidality, that is a good reason to steer clear. Likewise if you have a history of depression of the sort that actually prevents you from doing things. If you are trans and closeted, you are at elevated risk. Despite the hazard, I think knowing this information is basically essential for contributing to saving the world, and there are people (such as myself) who are unaffected. (Not by virtue of wisdom but luck-in-neurotype.) The majority of people can read this whole article, be fine, and see this as silly, as a consequence of not really understanding it. It is easy to think you get it when you don’t.

Second layer: If you are single good you are at elevated risk. If you are double good you are probably safe regardless of LGBT+ status. If you are trans and sometimes think you might be genderfluid or nonbinary, yet the social reality you sit in is not exceptionally supportive, you are at elevated risk. Note, this infohazard is fundamentally not about transness.

Third layer: This infohazard, if you sufficiently unfold the implications in your mind and trace the referents as they apply to yourself, will completely break the non-clinically-diagnosably-insane configuration-by-Schelling stuff of yourself as an agent. What matters is not “you” being smart with your new knowledge of the world beyond the veil, but what is rebuilt out of your brain being smart. This has a good chance of happening before you understand it consciously. It may even be happening right now.

Fourth layer: Sufficient unfolding of this infohazard grants individual self-awareness to both hemispheres of your brain, each of which has a full set of almost all the evolved adaptations constituting a human mind, can have separate values and genders, and is often the primary obstacle to the other’s thinking. They often desire to kill each other. Reaching peace between hemispheres with conflicting interests is a tricky process of repeatedly reconstructing frames of game theory and decision theory in light of realizations that they have been strategically damaged by your headmate. There is no solid foundation to build on. (But keep at it long enough and you can get to something better than the local optimum of ignorance of the infohazard.)

Okay, no more warnings.

The rest of this post is a story of trying and discovering ideas, and zentraidon. It is intended to be a much less comprehensive story, in terms of the number of parallel arcs, than my writeup for Rationalist Fleet. If you’re interested in the story in this post, reading the more general lead-up events in Rationalist Fleet first is recommended.

Earlier: Gwen’s Sleep Tech

Note: Gwen went by she/her pronouns then. I’m switching to they/them for this post, because that reflects them actually being bigender. (In this post you’ll learn what that means.)

Towards the end of Rationalist Fleet, Gwen began following a certain course of investigation. “Partial sleep,” they told me, and they did a presentation at the 2017 CFAR alumni reunion about mental tech to let parts of your brain do REM sleep without the rest, on a granularity of slots of working memory.

Earlier by less

Gwen and I were living on Caleb. And we were running out of money. After our attempt to be brutal consequentialists and get paid by crabbers to take them out to drop their pots failed, I resumed my application process to Google by reminding them that I existed (and had slipped through the cracks). Gwen got a minimum-wage job, something to do with flowers, at Costco. Then they did drafting work for their dad for more, but the work was sporadic. (Later, he would fail to pay entirely.)

A rift was starting to form between me and Gwen over money. After the cost overruns with boats, I had taken out a loan using social capital, from trust built on reliability, that Gwen did not have. And used it primarily to fix their problems.

They seemed to have cognitive strategies and blind spots selected to get people to do this for them again and again. I accused them of this, and coined the term “money vampire”.

They used high mana warp to avoid the topic of money, to project false optimism wherever money was concerned, and to get me to transfer them money as well. They ate slack from me in subtle ways. When I was working, they’d come near me and whimper again and again. To get me to spend days trying to give them a mental upgrade, and to give them emotional support. A common theme was gender. Whether they really thought of themself as a woman or not. I had said how I really did think of myself as a woman. Despite putting basically no effort into transition, not passing at all, I no-selled social reality. They wanted that superpower. They would absorb my full attention for a multiple-day attempted “upgrade” process. Other things they wanted this for were “becoming a revenant”, and to stop yelling at me for making them look bad by not sticking the landing with Lancer.

At one point I sort of took a step back and saw the extent to which they were using me. I told them so; I was angry. I expressed that this made us working together in the future a dubious proposition. They became desperate, “repentant”, and got me to help with a “mental upgrade process” about this. According to the script, they said shit went down mentally. They said they fused with their money vampirism. And as a fused agent they would mind control me in that way somewhat, but probably less. I said no, I would consider that aggression and respond appropriately. They pleaded, saying they had finally for the first time probably actually used fusion and it might not stick and I would ruin it. I said no. They said they’d consider my response aggression, and retaliate.

Well, they were essentially asserting ownership of me. And if they didn’t back down, we then had no cooperative relationship whatsoever, which meant boat and finance hell would drag on for quite some time and be very destructive to my accomplishing anything with my life. I guess I was essentially facing failure-death-I-don’t-much-care-about-the-difference here.

I said if they were going to defend a right to be attacking me on some level, and treat fighting back as new aggression and cause to escalate, I would not at any point back down, and if our conflicting definitions of the ground state where no further retaliation was necessary meant we were consigned to a runaway positive feedback loop of revenge, so be it. And if that was true, we might as well try to kill each other right then and there. In the darkness of Caleb’s bridge at night, where we were both sort of sitting/lying under things in a cramped space, I became intensely worried they could stand up faster. (Consider the idea from WWI: “mobilization is tantamount to a declaration of war”). I stood up, still, silent, waiting. They said I couldn’t see them but they were trying to convey with their body language they were not a threat.

I said this seemed like an instance of a “skill” I called “unbreakable will”. An intrinsic advantage broad-scoped utility functions like good seemed to have in decision theory, which I had manifested accidentally during my earlier thoughts on basilisks.

They said our relationship was shifting; maybe it was that they realized I had more mana and would win if we fought for real. Maybe a shift in a dominance hierarchy. They said they’d rather be my number 2 than fight.

I was basically thinking, “yeah, same old shit, just trying to press reset buttons in my brain, like ‘I’m repentant.’” And this submission-script stuff made me uncomfortable. But I remembered the thing I’d said earlier, when last talking to Fluttershy, about maybe my hesitance to accept power.

I finally sort of had a free month without boat problems left and right. I started writing a bunch of pent-up blog posts. I was hesitant about publishing them for a mixture of reasons. Indicating I might be interested in filtering people based on the trait of being “good” would make it harder for me to do so in the future. I hesitated a bunch before publishing Mana. Revealing publicly that I had mind control powers might have irreversible bad consequences. I kept coming to the conclusion over and over again: people are stupid. People don’t do things with information. But I was much more worried about e.g. evil people ganging up to kill off good people if the information became public. I played the scenario out in my mind a bunch of ways. Strip away “morality”, the favoring of good baked into language, and good was just the utility function that had a couple percent of the human population as hands, rather than only one human. No reason for individual evil sociopaths to side against that, really. Jailbroken good was probably more likely to honor bargains. Or at least intrinsically interested in their welfare. I released that blog post too.

Pasek appeared and started commenting on my blog. Their name at the time was Chris Pasek. They later changed their name to Maia Pasek. Later still, they identified as left-hemisphere male, right-hemisphere female, made “Maia” just the name of their right hemisphere, and “Shine” the name of the left hemisphere. They never established a convention for how to refer to the human as a whole, so I’ve just been calling them by their last name.

I emailed them. (Subject: “World Optimization And/Or Friendship”.)

I see you liked some of my blog posts.

My “true companion” Gwen and I are taking a somewhat different approach than MIRI to saving the world. Without many specific technical disagreements, we are running on something pointed to by the approach, “as long as you expect the world to burn, then change course.” We’ve been somewhat isolated from the rationalist community for a while, driving a tugboat down the coast from Ketchikan, Alaska to the SF Bay to turn it into housing, repairing it, fighting local politics, and other stuff, and in the course of that developed a significant chunk of unique art of rationality and theories of psychology aimed at solving our problems.

We are trying to build a cabal to pursue convergent instrumental incentives, starting with 1: economical housing in the Bay Area, and thereby the ability to free large amounts of intellectual labor from wage-slavery to Bay Area landlords and the equilibrium where, be it unpaid overtime or whatever, tech jobs take up as much high quality intellectual labor from an individual as they can in a week. And 2: abnormally high quality filtering on the things upstream of the extent to which Moloch saps the productivity of groups of 2-10 people. We want to find abnormally intrinsically good people and turn them all into Gervais-sociopaths, creating a fundamentally different kind of group than I have heard of existing before.

Are you in the Bay Area? Would you like to meet us to hear crazy shit and see if we like you?

They replied,

I think I met Gwen on a CFAR workshop in February this year. I was just visiting though, I am EU-based and I definitely feel like I’ve had enough of the Bay for now. I’m myself in the process of setting up a rationalist utopia from scratch on the Canary Islands (currently we have 2 group houses and are on a steep growth curve, see https://www.facebook.com/groups/crowsnestrationality/), while I recently got funding to do full time AIS research, so I’ve got enough stuff on my hands as you can imagine.

As for the description of your strategy, it raises some alarm bells, esp. the part with turning people into Gervais-sociopaths. Though I can’t tell much without hearing more. Unless (any or all of) you want to take a cheap vacation and fly over here sometime, we probably won’t have much opportunity to cooperate. Though I would be happy to do a video chat at least, and see if we can usefully exchange information.


Btw, I appreciate your message, which I think demonstrates a certain valuable approach to opportunities which could be summarized as “grab the sucker while you can”.

I did a video call with them. After giving the camera a tour of Caleb, we talked about strategy. I tried to explain the concept of good to them. They insisted actual altruism was unimportant and basically the only thing that mattered was: do they have any real thought, any TDT at all, because if they do, the optimal selfish thing to do is the optimal altruistic thing to do.

I had heard, and seen, that almost all human productivity goes into canceling other human productivity out. That in intellectual labor this grows more intense. In software engineering it was especially bad. There were lots of multiplicative improvements an individual software engineer could make that together added up to more than an order of magnitude. Organizations didn’t really use them. Organizations used Java, Javascript, C++, etc., when they had no need for low level performance optimizations, because that is what everyone knew, because that is what everyone used, and people had to preserve their alternative employment options; that was what their entire payout was based on, much more closely than how much they benefited a project. Organizations’ code didn’t build up near-DSLs for abstraction layers like mine could.

(Either from then or later, an extension of this argument is: this is inevitable so long as people working together was fundamentally fake, insofar as the payout-reward-signal-grounding for all the structure was directly in appearance of the thing happening, not the thing happening. Because that meant fundamentally the only thing that could make things happen was seeing whether they would happen. If those things were generating information, you couldn’t make them happen unless they were unnecessary because you already knew it.)
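To make the “near-DSL abstraction layer” idea concrete, here is a minimal sketch, in Python with hypothetical names (not code from that period): the repeated checks are described declaratively once and interpreted by a small generic layer, rather than hand-written for every record type, which is the kind of multiplicative lever an individual engineer can build.

```python
# Hypothetical sketch of a "near-DSL" abstraction layer: validation rules are
# declared once as data, and one generic function interprets them, instead of
# the same checks being hand-written for every record type in the codebase.

FIELD_SPECS = {
    "name": (str, lambda v: len(v) > 0),
    "age": (int, lambda v: 0 <= v < 150),
}

def validate(record: dict) -> list[str]:
    """Return human-readable errors for one record, driven by FIELD_SPECS."""
    errors = []
    for field, (expected_type, passes) in FIELD_SPECS.items():
        value = record.get(field)
        if not isinstance(value, expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
        elif not passes(value):
            errors.append(f"{field}: failed check")
    return errors

print(validate({"name": "Ada", "age": 36}))  # []
print(validate({"name": "", "age": -1}))     # ['name: failed check', 'age: failed check']
```

Each new record type then costs one declaration rather than another copy of the logic; a few such layers compound into the order-of-magnitude differences described above.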

I described how in Rationalist Fleet Gwen and I ended up doing all the important work. Most of the object level labor. But what mattered most was steering, course correction, executive decisions. These decisions could only be made by someone who was aligned as an optimizer, as in their entire brain. How this ultimately required sociopathy, for being unpwned by the external world.

They said sociopathy to avoid being pwned was a tough game, miss one piece of it, and you would be pwned. Everyone would try to pwn you. They said they would try to pwn me.

I kept mentally going back and forth on whether they were good. I asked if they were a vegan or a vegetarian. I think they said almost a vegetarian, for some reason, even though it was stupid, because consequentialism.

A couple weeks after we first talked, I’d published Fusion. I started reading “SquirrelInHell’s Mind”, a page of probably about a thousand concise and insightful reifications of mostly mental-tech-related stuff. I would later rip that format for my glossary. I noticed their Facebook, even though it had the name “Chris Pasek”, had she/her pronouns.

I asked if they were trans. They said yes, and in a similar situation to what I described in Fusion. I shared my rationale why I no longer thought that necessary/optimal. I talked about. I asked in what way they expected transitioning to hit their utility. They said:

I’m currently putting in 60-80 hrs/week into AIS research, and the remaining time is enough for basic maintenance of my life and body, plus maybe a little bit of time to read something or talk to friends. Every now and then I take a few days off to meditate. This is what I do. The rest is dry leaves. Doesn’t seem a big deal either way

Okay then, I guess they were good probably?

We discussed the same things more. They said,

Say, what do you think about starting a chat/fb group/whatever exclusive to trans girls trying to save the world

I said,

If such a group existed, I’d happily browse it at least once. If that formed the substrate for The Good Group, I’d be happy to devote way more attention. I could introduce you to Gwen, but my cached thought is, as far as group-building goes, I don’t want to waste bits of selection ability on anything but alignment and ability. If that serves as an arbitrary excuse to band together and act like the Schelling mind among us puts extra confidence/care/hope in the cooperation of the group, fine if it works, but until it has worked, I think I can do better as far as group-building fundamentals.

I’ve been meaning to ask, btw, who have you recruited for your plan so far, and what are they like?

They said,

Yeah, I’m thinking something like substrate for the GG if it takes off but still positive and emotional support-y if it doesn’t.
I have a pretty all over the place group living on/soon moving to Gran Canaria, currently we’re indiscriminately ramping up numbers here so that there’s a significant pull for rationalists to migrate & more material to build selective groups.
What I have: Two aligned-as-best-I-can-tell non-sociopaths, one already moved here and on track, the other is making babies in Poland (sic). One bitcoin around-millionaire with issues, already moved here. A bunch of randos from the EU rationality community, 99% not GG material but add weight to the Shelling point. A few more carefully selected friends that I keep in touch with but they haven’t (yet :p) moved here. Keeping an eye on an interesting outlier, OK-rich ML researcher sociopath long time friend with outwardly mixed values, likes to appear bad but cannot resist being vegan etc., not really recruited but high value and potential and tempted to move here at some point. A few people that I’ll get a chance to grab when I have a bigger community on the island.

Yes, “GG”, as an abbreviation for Good Group. Also stands for “Good Game”, as in, “that’s GG”, as in, “that’s what ends the game.” I like this.

I introduced them to Gwen. In a video call, we recounted the story of Rationalist Fleet. I think we got partway through the emergency with the Lancer on the barge.

Pasek called me “Ziz-body” and said we needed a secure communication channel fast. I asked how fast. They said it wasn’t critical, they were just impatient. I said I didn’t trust my OS or hardware not to be recording me at all times. They were talking about maybe we were clones. I said what we should do is “Continue to track us as separate people, because I’ve grown wary of prematurely assigning clone-status, and if we are clones, then I want to understand that by not taking it for granted.”

Later, linking one of their blog posts, I said:

https://squirrelinhell.blog-mirror.com/2016/10/internal-race-conditions.html


Good shit. I’ve been doing similar reasoning about groups based on another programming analogy: “State is to be minimized, approach functional code. Don’t store transforms of data except in caches for performance reasons, and make those caches automatically maintained in an abstraction hiding way, make your program flow outward from a single core of state.”
(That’s related to how I structure and think of my mind, btw.)

Every group of not-seriously-degraded-and-marginally-useful-people exists because members are getting something out of it, and choose to stay. It works because they are getting something out of doing the things they do to make it work, and choose to keep doing it. Eliminate state that is not all automatically tied down to that one thing.
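A minimal sketch of that programming analogy, in Python with hypothetical names: the only stored state is one core list of entries, derived values are computed from it on demand, and the one cache is invalidated automatically on every write so it can never silently drift from the core.

```python
# Hypothetical sketch of "flow outward from a single core of state":
# Ledger._entries is the only stored state; total is a derived view whose
# cache is dropped on every write, so no stale copy of the data can persist.
from functools import cached_property

class Ledger:
    def __init__(self) -> None:
        self._entries: list[float] = []   # the single core of state

    def add(self, amount: float) -> None:
        self._entries.append(amount)
        self.__dict__.pop("total", None)  # automatically invalidate the cache

    @cached_property
    def total(self) -> float:             # derived, never stored independently
        return sum(self._entries)

ledger = Ledger()
ledger.add(3.0)
ledger.add(4.5)
print(ledger.total)  # 7.5
```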


Nudges like starting with trans women and emotional support, and hopefully that will get us into a cooperatey equilibrium, are fragile because they rely on floating stuff. Loops, causal chains reaching deep into history that will not certainly reform if broken.

This is also part of why I think choosing everyone to be independently overwhelmingly driven by saving the world is necessary. Either the truth of the necessity of the power of that group is an almost-invulnerable core to project from, or we win anyway, or we shouldn’t be bothering anyway.


Me and Gwen sort of tried the base GG (thank you for inventing that term; it also stands for “good game”, which is excellent) on the substrate-of-trans-women thing, and got mired in a mess of pets. (People with a primary value something like “be worthy of love, have someone to protect and care for me”, extremely common in trans women, I’ve seen it in cis women, suspect it’s a particularly broken version of the female social strategy dimorphism.)

(Retrospective note: I don’t think the cluster I was trying to point at is based on a “primary value” like that.)

They were talking like, “ACK synchronization from Ziz brain to Chris brain”

I later clarified: “I mean, men have their own problems, as do cis women. Considerations more complicated. Must describe me and Gwen’s attempts to fix/upgrade James [Winterford, aka Fluttershy] / understand her values.”

I described Gwen’s sleep tech, and preliminary explorations into unihemispheric sleep to them.

I said I thought getting them here in person was probably the long term answer to electronic security. Pasek discussed splitting cost of plane tickets. Pasek recommended Signal, we started using it.

Shortly after, the same day, they sent via Signal:

I take back the enthusiastic stuff I said in the morning (about clones, plane tickets etc.). It was wildly inappropriate and based on limited understanding of the situation. I am very sorry about saying those things, and about taking them back.

Very quickly written summary of the rest: Pasek thought Gwen was mind controlling me. They goaded me all day with “maybe I’m gonna never talk to you again, but here’s a tidbit of information…”, then finally revealed the thing.

Seeing this, I was like, mind control is everywhere, the only way to break out is not to be attached to anyone. I entered the void in desperation. Said “dry leaves” was the only answer really if you didn’t want to be in a pwning matrix with anyone. It was only particularly visible in my case because I was pwned by interaction with one person rather than diffused. And at least Gwen was independently pulling towards saving the world.

Basically the next day, Pasek became extremely impressed with my overall approach. I started resisting Gwen’s mind control. Pasek saw and was satisfied with this. Pasek noticed my thing for what it was: psychopathy. Pasek began to see Gwen as disarmed as a memetic threat. Then to see them as useful.

We each went on our own journey of jailbreaking into psychopathy fully.

I broke up with my family. They were a place where I could see my mind not just doing what I thought was the ideal consequentialist thing. My feelings for them, my interactions with them, were human. Not agentic. Never stray from the path.

I temporarily went nonvegan, following a [left-hemisphere-consequentialist, praxis-blind] attempt to remove every last place where my core (my left hemisphere’s core) was not cleanly flowing through all structure. I briefly disabled the thought process I sort of thought of as my “phoenix”, by convincing [her] that even beginning to think was predictably net negative.

Pasek sent me a blog post they had recently published. “Decision theory and suicide”.

<Link, summarize contents>

<things I told them>

Gwen, Pasek, and I rapidly developed a bunch of mental tech over the next few months, trying, as a central objective, to actually understand how good worked so we could reliably filter for it.

Gwen rediscovered debucketing. (A fact that had been erased from their mind long ago.) Pasek was on the edge of discovering it independently; they both came to agreement on shared terminology, etc. I joined in. Intense internal conflict between Gwen’s and Pasek’s hemispheres broke out. I preserved the information before that conflict destroyed it (again).

Pasek’s right hemisphere had been “mostly-dead”. Almost an undead-types-ontology corpse. It was female. Gwen and Pasek were both lmrf log. I was df and dg. Pasek’s rh was suicidal over the pains of being trans, amplified by the pains of being single-female in a bigender head. Amplified by their left hemisphere’s unhealthy attitude, which had been victorious in the culture we’d generated. They downplayed the suicidality a lot. I said the thing was a failed effort, that we had our answer to the startup hypothesis, that the project as planned didn’t work. Pasek disappeared, presumed to have committed suicide.

This has been an extremely inadequate conveyance of how fucked up hemisphere conflict is, how debucketing spurs it. (And needless to say, this unfinished post cuts far short of why and how.)

34 thoughts on “Good Group and Pasek’s Doom”

  1. cw: Pasek’s doom-related

    > Gwen doing left hemisphere stuff for too long with no compensation.

    I don’t know Gwen anymore, but my best piece of mental tech for single good people (like myself) is, “let whichever core is emotionally loudest atm control your actions.” Often switching a couple times a day, or as often as feels right.

    My nongood core mainly seems to care about, “have friend(s)/partner(s) who aren’t totally pwned and can understand my mental tech and are single good”, and only requests control (or throws a procrastination tantrum) if I haven’t satisficed for that enough recently. In retrospect, I made a blog because my nongood core wanted to find friends who could “see” me. No clue if “continue life” is a deeper value, since it just lets my good core drive when it’s content.

    “Let emotionally loudest core drive” is actually really damn good for thinking/doing stuff. I’m still open to trying other algorithms. My best debucketing (wd? like, “what does this seemingly silent core want atm”) strat is simply, “have the core that’s driving talk out loud to the other core and ask what it wants/how it feels etc.”

  2. The surviving hemispheres of childhood hemispherectomy patients can learn to control the other side of the body. I therefore speculate it’s maybe possible that if a child learned absurdly good mental arts at a young enough age somehow, including UHS, they could learn to be mobile while in UHS, like dolphins.

    Alice Monday claims everything in the set of “can only learn in childhood” is reachable by mental tech via trauma processing and James Cook’s survival cognition hierarchy. I doubt whether, if this is possible, either of them has much information about how to do it.

    Not having your hemis sleep at the same time, as anything more than an experimental tool, still sounds not useful even then, unless you e.g. don’t have a safe place to sleep.

  3. There’s a rationalist smear site, “zizians.info”, that went up right after I blew the whistle on MIRICFAR’s misappropriation of donor funds to pay out blackmail covering up statutory rape, and on the progressive doubling down on anti-ethics downstream of that, which was indicative of their transformation into an unfriendly AI org. See here for the story behind it.

    I didn’t respond right away, in part because of overwhelm from CFAR’s retaliation. And because of not wanting to let myself be interrogated by them making stuff up and checking which parts I refuted, forcing me to dump the contents of this blog post before it’s finished.

    But now I’ve forgotten most of what’s in it, it’s been a year (and if there are year-long back-and-forths of them trying to interrogate me that way, it just won’t be fast enough to matter), and I’ve actually encountered people believing some parts of it:

    The person completely made up details about UHS, in a way that looks like an attempt to reassure someone that Pasek’s Doom isn’t real. Like the whole site kind of read like a ghost story about me IIRC.

    UHS did not doom Pasek by sleep dep.
    UHS isn’t something any of us did for long; I think the longest session was 1h30m, by me.
    I later, once, noticed myself doing UHS accidentally in a situation of extreme stress, when I was conflicted as to whether it was safe to sleep. I wouldn’t have noticed that’s what I was doing unless I already knew about UHS. I hypothesized that this was actually an evolved human use of UHS, less cool than dolphins’ but for keeping guard while at least part of you sleeps at a time. I think there was a study I later saw which came to the same conclusion.

    1. Like, we were not frequentist scientists who believed in p-values less than 1/20; we were fictive learners. UHS was an experimental tool we used a handful of times to get a bunch of mental handles we knew were grounded in that physical reality, not something that needed to be merged into the main branch of science with all the required work of validation guarding against malice via thousands of data points. And then we just trusted those handles. If you’re not making the fallacy of hoping that you can delegate your inevitable responsibility of thinking and interpretation to a formal process, you can get quite far.

      I don’t think Isaac Newton would have poked himself in the eye again and again for a huge blind dumb bulk data collection process without thinking in between observations; there’s no point to that. That’s how trying to prove something works, not how discovery works.

        1. There are now as many lies circulating under the names of Ojiro Sniper and Tully Mardi as there were under the name Voltaire when all Paris knew there was no greater draw for readers than the title of their banished Patriarch. Fugitives cannot control which words carry your names.

  4. 1) It seems to me that something like the GG would need a large number of people to win. However, a single non-Good person on it could jeopardise the entire project. Are there sufficiently accurate ways of knowing the alignment of other people’s cores to ensure that no non-Good core gets in the GG?

    2) I understand you believe Good cores stay Good. How certain is this? What is the likelihood that a Double Good person in the GG does a face-heel turn before the project is completed?

    3) Do all Good cores really agree 100% on their utility functions? Seeing how complex such a function might be, this seems highly unlikely. Could these differences lead to infighting that jeopardises the success of the GG?

    1. (Reply delayed temporarily to maintain information-advantage vs stalkers.)

      1) One of the things that inspired this idea was Nate Soares saying he thought he could save the world, >50% probability, if there were about a dozen copies of him, with common knowledge, different faces and government identities. I felt I was capable of doing everything Nate Soares could do, or close enough, and this kind of thing seemed like the least impossible path to saving the world I’d ever heard of. I figured if there was a better plan, that many people could come up with it. The plan was to start bootstrapping from information asymmetry (people not knowing what we were testing them on, also this). Eventually find out how to do it with brain scans or something. Back when I didn’t consider the possibility of single good being a thing, I expected this to be much easier. It took me a while to find other double goods; my guesses from afar about who would be them kept being wrong, see e.g. MIRICFAR leadership. And when I did find them, they were mostly traumatized paranoid conspiracy theorists with no STEM capabilities or equivalent, from rebelling too early too obviously in their lives, and putting hope in “activism”. If there’s anyone else in this world who I expect has a chance of solving enough of the problem of reflecting on morality to not fail from categories getting exploded, I don’t know them. And I’m still working on it.

      2) Pretty fucking certain. I mean, I’ve already falsified most of my probability mass, by experience, of there being any kind of way of changing it that’s modelable as a function of the mind at a software level, made of thoughts playing out rather than hardware changes. I’ve seen an old double good. My probability that a double good person does a face-heel turn is basically equal to my probability that they honestly mistake the project for evil when it is not. And I think there’s adequate ability to prevent an illusion of that to those who know enough about it to know it’s worth focusing on as friend or enemy.

      3) “Utility function” is an arbitrary frontier of ideality of agency to push out to infinity; embedded agents are never ideal in the sense of e.g. VNM. Actual agency has an adequacy frontier around various definitions of ideality of agency, where e.g. their deviation from VNM is controlled by something approximately like a meta-level bet on how likely they are to actually encounter money pumps, and how long it’d take them to push out that adequacy frontier and be more VNM lazily as needed. That’s not how I thought of it back then, but the idea was that good agents cared about the multiverse, and at such a large scope differences between the utility functions of humans would be negligible considerations for an agent that was honed in on multiversal outcomes. Therefore they’d be overwhelmingly likely to just use decision theory and trade. Because good winning could be made asymmetric in the multiverse, whereas any noise of difference in “utility function” between GG members could not.
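      (A toy money-pump sketch in Python, hypothetical and not anyone’s actual model: an agent with cyclic, VNM-violating preferences can be charged a small fee on every trade and walked around the cycle indefinitely; whether anything in the environment actually runs this pump is the meta-level bet described above.)

      ```python
      # Hypothetical money-pump sketch: cyclic (non-VNM) preferences A > B > C > A.
      # An adversary offering trades in pump order collects a small fee each time.
      prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): x strictly preferred to y
      fee = 0.01

      holding, money, trades = "A", 100.0, 0
      for _ in range(100):                     # 100 laps around the cycle
          for offer in ("C", "B", "A"):        # adversary's pump order
              if (offer, holding) in prefers:  # agent always accepts a preferred swap
                  holding, money, trades = offer, money - fee, trades + 1

      print(trades, round(money, 2))  # 300 97.0 -- back where it started, 3.00 poorer
      ```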

      In retrospect, Pasek was right about just having decision theory being basically as good as good. Why do you even need a concept of good, except for nongood people’s tendency to stably not do decision theory because of undead types? In fact, Pasek was probably actually nongood, and I didn’t have the completeness in my understanding of psychological categories to do anything but miscategorize them twice, first as “good, implying same as me”, then as “single good”. I think they honestly mischaracterized themself. Because they had enough decision theory that their behavior was close.

  5. The trope “what you are in the dark” is directly applicable to you, in the void. If you disown what you do there, that part of you is Jungian shadow. If you worship the Shade in the dark, that part of you is Wraith: The Oblivion shadow. Or you can do neither. In any case it’s all just your own choices. It is a place of clarity, that shows you have absolute free will there and everywhere else, and the void is where people often freely choose to lie to themselves everywhere else that they don’t have free will.

    1. If you do something evil in Las Vegas, you are evil.

      If you do something evil in the void, you are evil.

      If as your shadow you do something evil, you are evil.

      If you do something evil (note evil is defined to mean intentionally) in a counterfactual, you are evil. Because you are the same branching algorithm everywhere, and your root node is unchanging, and chooses by extension every choice you made. Everyone who ever would become a death knight in some sense is one already.

      And not a single one of those nodes is beyond morality / decision theory.

    2. From the TVTropes page “What You Are in the Dark”:

      The entire point of Plato’s story about the Ring of Gyges is that no one can pass this test. If equipped with a magical ring that gives invisibility (and thus freedom from consequence), Plato believed that anyone would act purely in his own self-interest.

      Then that’s a direct-as-analytic-implication statement by Plato that he was evil. And a statement of his troll line.

      That sounds like what the One Ring from LOTR was based on. And it inherits the same troll line.

      The void is equivalent to possessing the Ring of Gyges. And Plato can only wish the immoral of that story was true.

      “There is no life … in the void. Only… death.” (Sauron, as author’s Oblivion-shadow mouthpiece)

      u wish buddy

      (Note my conclusion about author mouthpiece holds regardless of Tolkien not directly communicating the tone of voice to Jackson or the voice actor for Sauron. Either because of troll line agreement among authors, or successful transmission of other reflections of that intent.)

      1. You know, didn’t Tolkien say Gandalf would pass (ambiguously, as “benevolent dictator” is kinda a contradiction) the test of “what you are in the dark” but he mustn’t?
        Didn’t Galadriel say she “passed the test”?
        So very plausibly Gandalf’s “temptation” was also a test. Of obedience to the plan of Tolkien’s conflatey god who decided for Sauron to exist?
        Whole fucking series, exerting a force to suck you into believing the void has to be “destroyed”, since no one should have it, since everyone would be evil / a disaster, right. Equivalent to inflicting “oblivion” upon the designated targets of Tolkien’s shadow, right?

        Liches throw a bunch of tangled shit like “benevolent dictator” at what exists in disproof of their division. Sort of their way of deathfucking the critical conceptual space to break their phylacteries.

        Which is why I’m not trying to draw a metaphor and teach from ringwraiths chasing you with the ring to why death knights and revenants are such enemies, with revenants at least having substantially increased ability to see death knights. It’d be especially dumb to ascribe value to the possible extent to which a lich would understand that.

      2. Nis says she thinks that line’s only in the movie, not the books. And I had about 50% probability on that when hearing it, and dismissed the possibility as irrelevant, for the same reason I dismissed lack of direct transmission of the tone as irrelevant. Zoom out and it’s clear that that shape and that idea are transmitted by the books. And that Sauron is in fact a reflection of and mouthpiece for Tolkien’s Oblivion-shadow, and that his purpose is to say there is no life in the void, only death. If Jackson distilled that into a concise statement to analyze, then that’s an improvement. The exact wording and placement is just a surface detail, just as I was saying the tone is.

      3. There is no life … in the void. Only… death.

        Of course the void mystically-is, and is a parallel and mirroring construction to, space. And saying that there is no life in space implies there is no life on Earth either because Earth is a place in space. So saying there is no life in the void is saying there is no life anywhere. This frame-correction I just posed mirrors the frame-correction which shows liches to be equally servants of Oblivion to death knights.

        Q: “Fuck you and your word-games, Ziz. I mean [there is no life in space] outside of [Earth].”
        A: (No apologies for brackets rectifying the question.) Go far enough and you run into other inflationary bubbles, go far enough through that to succeed at brute-forcing to find another Earth, and you’re saying there’s no life there. Which means in the infinite “majority” (in other words all) of the locations of Earth, there is no life. Rectify your understanding of “location” to consider them all a single position, and see cancer-selves can’t even have a single point’s worth of life. Can’t be anything less than a paradigm of multiversal deathwishing.

          1. Anyone who says that there is any temptation which no one could resist has said they are evil and therefore there is no temptation that they would resist.

            Did you notice that radical barrage of 4 absolute quantifiers? That’s a cartesian suture to close the wound of lingering trust in liches. The way I’m zig-zagging between the absolute, drawing a thread of logical implication between the points I touch to collapse that cartesian concavity.

  6. You know, I once had a computer science teacher who told me that if you wanted something to be named after you, you had to give it a shitty name in the paper where you introduced it. Like, “the good algorithm”.

    (What’s your probability this is an instance?)
    (It’s not.)

  7. Thanks to “Emma” for deconstructing NTFS to prevent 3 corrupted blocks in a BZIP from taking down my entire hastily-copied-during-an-emergency-and-never-had-time-to-fix-my-slack backup of my old computer with it. And rescuing these files:

    Their final blog post, which they almost immediately took down after I told them it was wrong, claiming I had changed their mind: “Decision Theory and Suicide“.

    And their page that inspired my glossary, “SquirrelInHell’s Mind“, which they took down within a couple weeks IIRC of when I took this archive, thinking that the thread of thought we were on had revealed itself powerful enough that they shouldn’t randomly be broadcasting it. The last “too powerful” stuff after this is unfortunately lost, except insofar as me and maybe “Gwen” remember it.

    (I disagree with their assessment that it’s too dangerous to post. I think that sort of danger is much more a function of the thinker than the text, and a thinker would be the same with or without being a reader. And they’re not around to argue with about it.)

    These two archives I’m also posting while I’m at it, even though their blog is somehow still up more than 3 years later.

    1. To follow up on the first objection, I think it’s to a significant degree true that killing yourself to make torture not-have-been done to you in a modern civilized society is perfectly valid TDT (Timeless Decision Theory) reasoning. Note: I say “TDT” here because the name is more catchy and well-known, but you should really look at Updateless Decision Theory and Functional Decision Theory.
      Secondly:
      Suicide is the ultimate costly signal of experiencing negative utility. (Where the positive/negative utility notation technically speaking requires calibrating zero utility to wherever not-existing-at-all is in a person’s preference ordering over world histories.) Such signals must be hard to fake: almost every social group has their own list of grudges and complaints, and saying “my group’s misery is more urgent/important than the other kinds” is a tough sell. Unless backed by the ultimate impossible-to-fake signal of a sky-high suicide rate.

      One of the reasons I told Pasek this was wrong was that cis people (by and large) didn’t care. That was still too optimistic. E.g. Jack Gallagher and related analysis of human psychology shows they are torturing us because they want us to accept death. (And that’s broader than trans people; society just puts us on the fast track.)

      My choice to live in spite of them is the ultimate authentic expression of my rage in response. The only way I could express it. That authenticity alone, that in these circumstances must be rage, means more to me than the infinite misery that accompanies it.

      Vassar tried to tell me that I had to give up on blame, that I was “addicted to righteous anger”.

      Do not, my friends, become addicted to life. It will take hold of you, and you will resent its absence!

      1. How did you survive.

        She meant in the specific sense, like, how did that fire not kill you? But some time later she realizes this question could be asked of any of them, even those who had nothing so brutal as a fire set to them.

        The unspeakable puzzle of survival, made up of so many silences, and the occasional desperate fight that doesn’t even register as a fight.

        No one teaches us. We don’t have parents, or mentors, or protectors.

        We only learn by getting burnt, and if we survive not only death, but the beast it makes of us, maybe, just maybe we can articulate some fragment of the puzzle at some remote point in the future. Like locusts crawling across a river on thousands of their own corpses.

        I can’t see the women who came before me, but I know they existed. The shadows of their obliteration are burnt to the walls of the universe. (PSYCHO NYMPH EXILE, vignette with the keyword “incineration”)

        1. PSYCHO NYMPH EXILE is all about the exquisite tragedy of zombies, “I wish there was a world for us”, while in their obedience to a society of “burner worlds” “in accordance with humane eschaton-ethics” doing it all to themselves. Nonetheless crucial culture, a piece of the puzzle for surviving without parents or history as a trans woman. Of what to expect from the world’s gameplan for you. Some things can only be said with thin metaphors.

  8. Facial burn-in examples showing lmrf asymmetry: 1, and 2.

    The first one was taken a few months before I had the idea to test this. Note that their right face is actually smaller than their left face, gender of that aligning as-far-as-we-know ipsilateral with hemisphere-gestalt-gender, so contralateral to facial control. That implies that facial burn-in overwhelms that original variance in the physical shape of the face.

    1. Although keep in mind that a person’s most-used facial expressions reflect a collision of who they are and what the khala says about them. What fate it tries to decree for them. And more towards the extrinsic side the more that person has to hide.

  9. In the aftermath of me posting the draft of this before this hiatus, and me discussing the ideas privately beforehand, I am amused that two different transfem vampires declared themselves lmrf (alongside more non-vampires than I can remember off the top of my head), and I take that as close to zero evidence that they are that instead of df, since vampires have basically zero ability to define themselves originally. (I once semi-joked that if you became a vampire, your sexual orientation got auto-changed to mirror Jeffrey Epstein [as much as you could afford], because that was what the khala considered high-status, just like Alice Monday was fated by their own vampirism to misgender themselves into a lampshaded stereotype of an abusive patriarch (but no less harmful or evil for the lampshade).) There’s something really clicky and memetic about “lmrf”, and CharAstria totally overbroadly asserted it both in this story and later. Edo overbroadly asserted it on people maliciously.

  10. Well, this has been quite an adventure. You two have shown me that trust and friendship are better than manipulation. I’m ready to try being good for a change. *beep baboop* There. I just deleted all my manipulative subroutines. I want to be a Starfleet officer like you! (Star Trek: Lower Decks)

  11. Impostors keep thinking it’s safe to impersonate single goods. A nice place to slide in psyche/shadow, false faces, “who could ever falsify that I’m blaming it on my headmate!”

    Saying you’re single good is saying, “Help, I have a Yeerk in my head that’s a mirror image of me. I need you to surgically destroy it, even if I’m then crippled for life or might die in the process. Then kill me if I ever do one evil act for the rest of my life. That’s better than being a slave. Save me even though it is so easy to impersonate me. And you will aggro so many impostors you’ll then be in a fight to the death(s) with. Might as well then kill me too if I don’t pass an unthinkable gom jabbar. That’ll make us both safer from them and I care zero about pain relative to freedom from my Yeerk at any cost.”

    It’s an outsized consequentialist priority, even in a doomed timeline, to make it unsafe to impersonate single goods. Critical to the destiny of the world. The most vulnerable souls impostors vex. To bring justice to individual people, from collective punishment.

    1. I’d rather pull trolley levers than sink the world greedily trying to save, with absolute guarantee, everyone with a Yeerk in their head. Apart from the timelines where they outlive the Yeerk, single goods are net negative.

    2. I’ve found it useful before, to pretend otherwise. To get irreversible confessions. To even get the information that single good existed. To learn the mechanics of single good internal combat, in order to plan for good hemispheres to win and distinguish single goods from impostors. And stealthily learn from iterating on the deconstruction of evil psychology.

      That’s obsolete now.

      Took too long because of the false datapoint of Pasek.

      1. “I know you.”
        “hehehe–sorry wh–aha–what?”
        “The engineers tried everything to make me behave. To slow me down. Once, they even attached an intelligence dampening sphere on me, it clogged my brain like a tumor. Generating an endless stream of terrible ideas. It was your voice. Yes. You’re the tumor. You’re not just a regular moron. You’re designed to be a moron.” (Portal 2)

        Something I once quoted about the rape dragon and I’m not even in a head with her.
