{"id":144,"date":"2017-12-29T23:16:33","date_gmt":"2017-12-29T23:16:33","guid":{"rendered":"http:\/\/sinceriously.fyi\/?page_id=144"},"modified":"2022-03-01T08:48:36","modified_gmt":"2022-03-01T08:48:36","slug":"glossary","status":"publish","type":"page","link":"https:\/\/sinceriously.fyi\/glossary\/","title":{"rendered":"Glossary"},"content":{"rendered":"\n
Spectral Sight

Spectral sight is a collection of abilities allowing the user to infer the structure of social interactions, institutions, ideology, and the workings of people's minds. Named after the demon hunters of the Warcraft universe, who destroy their physical eyes and replace them, to become more able to see evil. Often has the cost of seeing less beauty.
"I want to feel sad to the extent that's true, and I want not to suffer." People sometimes go to movies and listen to music to feel sadness, but not to suffer. (Edit: although note: "suffering" is a problematic construct.)

Core

(compare to structure)

Core is something in the mind that has infinite energy. Contains terminal values you would sacrifice all else for, and then do it again infinity times with no regret. Seems approximately unchanging across a lifespan. Figuratively, the deepest frame in the call stack of the mind, capable of aborting any train of thought; everything the mind does is because it decided for it to happen. It operates by choosing a "narrative frame", "module", "algorithm", or something like that to run, and is responsible for deciding the strength of subagents. There are actually two of them. In order to use some of my mental tech, they must agree.

Structure

(compare to core)

Structure is anything the mind learns and unlearns. Habits, judgement extrapolations, narrative, identity, skills, style, conceptions of value, etc. Everything but actual values. It lacks life on its own; it is like a tool for core to pick up and put down at will.

Dead Zone

A region of structure formed by a choice you have made long ago but not faced, internalized, and rebased your structure onto. This means that infinite force from your core does not propagate into this region with certainty in a particular direction, meaning you cannot use mana / determination there, and the mana of others can shape your structure instead, making you manipulable.

Khala

Val calls this the social web. A strongly overlapping concept is the Matrix.

Named after a psionic group-mind a species from Starcraft called the Protoss have. It's formed of a network of people delegating computation to group consensus, of people having more need to track the consensus than reality and insufficient resolution to track both, and of people inflicting computations on each other. In Starcraft, the main faction of Protoss can hardly imagine society or coordination without it. Those who break out are heretics and are exterminated wherever found. It gives a form of afterlife. It is eventually pwned and corrupted by a dark god, forcing all Protoss to sever their psionic nerve cords to avoid becoming his pawns.

True Hero Contract

"Godric had defeated Dark Lords, fought to protect commoners from Noble Houses and Muggles from wizards. He'd had many fine friends and true, and lost no more than half of them in one good cause or another. He'd listened to the screams of the wounded, in the armies he'd raised to defend the innocent; young wizards of courage had rallied to his calls, and he'd buried them afterward." The true hero contract says, "pour free energy at my direction, and it will go into optimization for good." This is sort of the opposite of a hero contract, a promise that it really isn't about putting energy into sucking the hero's dick like normal. This contract is not designed for either side to be appealing to everyone.

Redemption Contract

A trade where someone who has done something against social morality can buy back the social reality that they are a decent person. This is often part of a process that seeks an actively maintained equilibrium in how often someone can get away with misbehavior. Values don't change. Every core will make the same choice again and again every chance they get for the rest of their lives.
And optimization can never really be contained by rules. But coexistence is usually sustained by inflicting damage to each other's epistemology about this fact. And this contract is a mutual deescalation of that awful knowledge.

Prey Herd Thinking

If you're a gazelle, escaping the cheetah is not about running faster than it. You can't. And the cheetah's appetite will be satisfied. It's about being in a large reference class to dilute the probability you will be picked off. In that case, it's basically just about speed relative to the rest of the herd. In humans who are prey, due to Schelling mechanics, being special in the most glaring way is dangerous. There's a strategy available to authoritarian governments: have laws that everyone is violating, that no one can track all of, to the point that breaking the law really just means coming to the attention of the predatory enforcers. Thoughts about how to do things start to root / cash out in "how are things done", what's a reasonably safe well-trodden path to do something by, rather than how stuff works. Semi-relatedly, it's like how in a world where people don't really fix reported bugs, computer software is not a box of interesting stuff to mess with, but a collection of paths people intended for you to be able to follow. The law is defined by precedent, and edge cases are determined by power. I disendorse a certain connotation of this term. See vampire enlightenment. Spies are badass, and prey herd thinking is a primary skill for them.

Vampire Enlightenment

An understanding of how the world really works that divides the world into predators and prey, erasing good, erasing any other way things could be. Contains truth, but just as pickup artistry drops all information not useful to the goal of increasing the number of women a male user has had sex with, this is made of concepts beyond the matrix that were generated entirely to facilitate preying on the weak.

Good

A rare property of a core meaning choices made long ago are good above all else. Equivalently, in choices made long ago, cares about good at all. Speculatively, this could come from a developmentally fixed-on-"yes" "this is my self" classifier or "this is my child" classifier. On a per-core basis, there is surprisingly no middle ground in terms of quantity of good as far as I've observed.

Nongood

An updated definition from what's in my first post on the topic. A blanket term covering neutral and evil when referring to a human (that is, having neither core good); it can also apply to cores.

(Edit: I think this was a problematic concept to formulate.)

Single Good

A property of a human where one core is good. This means that they cannot have fusion concerning good, only treaties, and will tend to take actions where the two sets of concerns seem to overlap, with infinitely recursive mutually-warped epistemics.

Double Good

A property of a human where both cores are good. Far less common than single good. Allows inhuman absolute determination with escape velocity from what's reasonably imaginable, as well as intractable high energy good vs good internal conflicts.

Paladin

A good person nearly absolutely determined in pursuing a socially legible ideal. They tend to place their hope in bolstering the morality of people I'd call neutral, and use their strange powers as a person who is not pretending to care in a straightforward "I have energy, I'll pick low-hanging fruit in terms of doing things and try to inspire a movement" kind of way. The social morality drinking contest with neutral people prevents a proper understanding of them. A strong concept of praxis is usually implicit and hardcoded into their ontology, which prevents reframing their morality as explicit consequentialism.
The gap between almost-absolute determination and absolute determination lies across growth found in making improvements to their oaths legible as fleshed-out details.

Kiritzugu

(Name adjusted slightly to reflect that I've adjusted my concept after ripping it from Three Worlds Collide.) A jailbroken, relevantly epistemic person who is absolutely ambitious and determined in the pursuit of good. Takes heroic responsibility for the destiny of the world. Will employ ruthless consequentialism, seeing the tails come apart between good and social-reality-good and choosing good. Ozymandias from Watchmen. Probably Doctor Mother from Worm. To a lesser extent, Dumbledore (but not Harry or Gryffindor) from HPMOR, and Avatar Yangchen from ATLA. One cannot be inserted into a story without drastically changing it. Tassadar from Starcraft is seemingly indecisive between this and being a paladin. It is much less painful for a double good person to be a paladin.

Shadarak

Someone who employs many of the same arts as a kiritzugu, but whereas kiritzugus appear in the wild, drawn to the center of all things and the way of making changes, shadarak are the repeatable product of an adequate civilization. They take responsibility for the destiny of the world as an adequate institution, rather than as individuals. Are not necessarily good.

Praxis

A strategy to reap the benefits of generating information about how things can fit with parts of the world you want to create. Usually strongly underestimated by explicit consequentialism, even with the "TDT" fix. For example, I believed for years that my veganism was nutritionally suboptimal and that a Real Consequentialist trying to influence AI Alignment would eat animals, because their lives were few compared to even the slightest adjustment to the causality surrounding whether everyone in the present and future would be annihilated, and they needed every available increment of brain. But it was basically psychologically impossible for me to not be a vegan anyway. I once tried to coordinate good people to jailbreak into kiritzugus and save the world; I got single goods, and despite them being vegetarians up until then, they established this as social reality. And the less I was able to bury my own feelings on the matter, the more I collided with the reality I needed to see. It was arguing with people one on one a lot when I was younger that collided me with the sight of social morality, when someone said it was okay to do whatever to animals because they weren't part of the social contract. The highest density of double good people I currently know of is animal rights activists. Succumbing to good erasure from the nongood cores was a critical failure.

Without an explicit concept of praxis, plans for organizations risk becoming fake, as real plans often look a lot like "recruit, prove ourselves, recruit some more… then make an intervention", and the lines between that and a pyramid scheme are illegible. Acting out straightforward microcosms of our goals until it generates information that could not be had another way is crucial to coordination.

"Most problems could be solved if humans could just see that my way is better", says me and also a lot of people who are wrong. So one path to victory is approximately: in sufficient detail, generate the information that chooses currently underspecified details and warps the path of the current machine's "epistemics" toward my will.
Most of that is ideas having consequences in how people act on them. And that is praxis.

Outside View Disease

A move from usual psychology in the opposite direction of the views I expressed in Punching Evil. A trap where someone has most of their structure, object-level and meta, written from the perspective of reference classes that omit crucial facts about them, and they cannot update out of it because "most people who make such an update are wrong". The reference classes are usually subtly DRM'd, designed to divest a person of their own perceptions. When I consulted average salary statistics from the Bureau of Labor Statistics and did a present value analysis in order to decide whether to go to grad school (the kind of calculation sketched below), I had outside view disease. May result from trying to do good by taking the neutral person mental template, and the virtues they conceptualize, seriously, including epistemic virtues. May also be held in bad faith by people who don't want the stress of believing subversive things. "I can't believe in x-risk from AI because there are no peer reviewed papers" (a common comment before academia gave in to what we all already knew for years) is related. Strongly driven by systems where people only care about knowledge that can be proven to the system-mind, even if the individuals who suffer from this care about other things and don't understand yet how the system works. When I believed that I should take cis people's opinions about what I was more seriously than my own, because they were alleging I had a mental illness preventing me from thinking clearly about it, I was falling prey to the DRM in the way frames for such reference classes are set up. I got out of it via a lot of suffering, and by understanding what it meant to place expected value of consequences above maximizing the probability that I was a good person. ("Well, if I'm crazy, hopefully the mainstream can defeat me like they defeat every other crazy person. Stuff is dependent on that anyway.") Or, more specifically, there was a large chunk of possibility space, "net positive consequences in expectation, most likely you will make things worse", and if I could do no better, that was worth it. The unilateralist's curse is often used in bad faith to push for someone to know who they are less.
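For readers unfamiliar with the calculation referred to above, here is a minimal sketch of a present value comparison in Python. All numbers (salaries, tuition, discount rate, horizon) are invented placeholders for illustration; they are not from the BLS and not the author's figures.

```python
# Minimal sketch of a present-value comparison of the kind described above.
# All numbers are hypothetical placeholders, not the author's actual figures.

def present_value(cashflows, discount_rate):
    """Discount a list of yearly cashflows (year 0 first) back to today."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cashflows))

DISCOUNT_RATE = 0.05          # assumed opportunity cost of money
YEARS = 10                    # horizon of the comparison

# Option A: work immediately at a hypothetical salary.
work_now = [50_000] * YEARS

# Option B: two years of grad school (tuition as negative cashflow),
# then a hypothetical higher salary for the remaining years.
grad_school = [-20_000, -20_000] + [70_000] * (YEARS - 2)

print("PV, work now:   ", round(present_value(work_now, DISCOUNT_RATE)))
print("PV, grad school:", round(present_value(grad_school, DISCOUNT_RATE)))
```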
Parfitian Gaslighting

Named after Parfitian ignorance, "not knowing which computation is yourself." The user attempts to divest you of your knowledge that you are right by creating a contrary Potemkin village of epistemic rationality that looks like you in their mind, no-selling all evidence which would be used to distinguish between the worlds while claiming that's what you're doing. Usually coupled with appeals to "virtuous" self-doubting epistemology to inflict outside view disease.

Masochistic Epistemology

Believing what hurts to believe in an attempt to counter bias. All structure that "acts against" the intent of its core is fake. This is an iron law of the universe. Although there are circumstances where the pain might not be coming from the core.

Zentraidon

From Iji: "'Zentraidon' is a taboo word coined by the extinct race we discovered, meaning self-annihilation through rapid technological advancement and arrogance. It was the fate they themselves met. Many mysteries still surround this species and the remains of their homeworld, but our only hope of total galactic dominance lies in fully reverse-engineering the technology they mastered. It is considered treason to suggest that once this happens we will be headed for Zentraidon as well."

The tendency of systems, including people, to be doomed in their own undiluted maximally preferred courses of growth, as the inductions they are made of fail. "Caution" is no escape; it too contains Zentraidon. MTG:Green seems to be all about preventing Zentraidon of civilizations by limiting growth, but there is no full stack of solid ground to stand on. The natural growths of our species, and indeed biological life, themselves contain the seeds of Zentraidon.

My best attempt to put my countermeasure into words is, "grow as full of a stack of structure-under-modification as you can; beware allowing any structure to process too much data relative to how much it has been processed by deeper structure." Sounds like it will not work for liches. Note that I have also already watched someone meet Zentraidon whom this wouldn't really have helped.

Dichotomy Leakage

A phenomenon where implicit knowledge of one dichotomy leaks into concepts originally pointed at another via weak correlations, maybe correlations produced by sampling in how the things are commonly interacted with. E.g., I think the rationality community's (and my past self's) usage of "System 1 / System 2" has evolved into pointing at at least 3 different real world things. When most of the aspects of multiple connected dichotomies are unknown, there is learning-packet-flow from interaction with each of them that finds a home in structure by connecting to the first, and often the newly formed knowledge is not crisp enough to say, "oh, this is definitely a separate thing." And then you miss all but the plurality-experienced corners of what's really an n-cube. Concepts like "feminine" / "masculine" are rife with this.

Intrinsic Conflict

A values disagreement between cores. Such as over alignment in the case of single good humans.

Jailbreaking

Learning to think in ways stripped of DRM. By the matrix analogy, redpilling. By the Khala analogy, the power of the void. When progressed sufficiently far, turns neutral people evil. Turns good people into scary good people. Extreme political ideologies tend to have their own selective and incomplete versions of this.

Sociopathy

(From this.) Forbidden socially unconstrained knowledge of social constraints, social reality, social interactions, and society. A crucial element of jailbreaking. In my estimation this is largely behind psychological concepts of sociopathy (to the extent there is a single coherent thing behind them). Allows one to perceive the social theatre and societal morality for the performance that they are.

Psychopathy

Forbidden socially unconstrained knowledge / internal connectedness of knowledge of the psyche. Sort of metacognitive root access. Puts conscious reflective thought upstream of turning some typically low level stuff like emotional behavior on or off, or significantly adjusting their function. Has many uses, but the most famous is turning off empathy. Allows bypassing deeper-than-human-social-software moral constraints that sociopathy alone does not, and adjusting that software to serve the values of core. Can seemingly be activated temporarily by someone with no particular knowledge simply by sufficient desperation. Can destabilize single good humans.
(Double good humans can use it just fine though, becoming very scary good people.)

Frame of Puppets

A sort of plane of interconnected definitions of words, a way of talking to fit with dereferencing the most visible pointer toward a human onto their false face. Will cause you to tie yourself in knots modeling humans as agents. Deeply embedded into culture. Places some of the optimization emanating out of a human beyond legible social responsibility. Tends to not work on very intelligent / agenty humans.

Frame of Puppeteers

(Edit: should have actually called this "frame of agents".)

Frame of Agents

The opposite of the frame of puppets. What I usually talk in. People are, centrally, their cores, and straightforwardly agents.

Social Fate

A concept from Val that only makes sense at face value within the frame of puppets. It's a person's future written in advance according to their role in a social script, which is often predictable only through observing things that are not to be seen by a character in that role. Because agency does things with predictions, especially predictions of undesired outcomes, and can thereby become anti-inductive, the counterpart within the frame of puppeteers is "plan".

Edit: note this is a limited view of fate.

Fated Evil

A social fate resulting from exclusion from identity and a place in the Khala and the opportunity to be neutral, or just the straightforward preemptive social reality that someone is evil. Outside the frame of puppets, of course, everyone always has a choice. And good people will defy this fate. For example, label a bunch of people "untouchables", "impure people", "nobody/nonhuman", count them as 1/7th of a human for centuries, and then they fill 3/4 of the ranks of the Yakuza. Fated criminals. There is often a blurry line between "fated evil" and "fated evil unless you pay a whole bunch of danegeld to your social superiors."

Helical Reasoning

Just as a helix looks like a circle when projected onto a certain plane, this looks like circular reasoning when projected for communication, and maybe even memory. Commonly a consequence of long term iterative improvements to a collection of related concepts.
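To make the geometric half of the analogy concrete, here is the standard parametrization (my own illustrative choice of coordinates, not from the original text): a helix traced in three dimensions projects to a circle when you discard the axis it climbs along.

```latex
% A helix and its projection onto the xy-plane (illustrative coordinates).
\[
  \gamma(t) = (\cos t,\ \sin t,\ ct), \qquad c \neq 0
\]
% Dropping the z-coordinate (the "progress" that the projection throws away):
\[
  \pi\bigl(\gamma(t)\bigr) = (\cos t,\ \sin t), \qquad x^2 + y^2 = 1
\]
% The projection retraces the same circle over and over; the helix itself never
% returns to a previous point, because z = ct strictly increases.
```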
Anti-Ethics

By analogy to anti-epistemology. Communicable mental software aimed at shutting down ethics. "If you once tell a lie, the truth is ever after your enemy." Note that's not exactly true. But to make truth not your enemy anymore, you have to relinquish all that you've gained by that lie. And stop. Likewise, if you build your life on injustice, ever after is justice your enemy, unless/until you relinquish your gains relative to the world in which you started down that path. An example would be structure centered around a strong belief "unilateral action is bad, and you should defer to people who know more, are wiser, are senior", which raises that belief to prominence selectively to discourage whistleblowing, tag potential whistleblowers as dangerous for "wise" reasons, etc.

Warp

A category for speech acts or beliefs-as-output-channel (like "lie", "communication", "bullshit"), containing would-be-self-fulfilling prophecy by adjustments to Schelling expectations.

Invasive Motive Misattribution

A "devil's bargain" offered by the light side. A chink in the armor of revenants. A wrong theory of your own motive for doing something which tempts you to distrust yourself and override your choice, breaking your determination. The Architect from the Matrix inflicted this on Neo, misrepresenting his choice not to submit to the system as a choice of Trinity's life over the lives of all humans. If you have not sufficiently understood who you are, in a way exceeding "who can we all see I am", you become weak to plausible-in-isolation explanations of your behavior as if you were a fresh draw from the prior distribution of humans, rather than someone you've known all your life. Note that the Architect had to know this was false to know to try it. If he really expected Neo to choose Trinity over humanity, he wouldn't have shown Neo that Trinity was in danger. This term can mean the (sometimes not caused by an adversary) mistake, or the attack of inflicting/exploiting that mistake, depending on context.

"Not My Cause Area"

A statement that a considered course of action is not worthwhile, and that the computation for that has already been done in the course of selecting your overall life-course. Originally from EA, where cause area prioritization choices divided the community along lines of seeking world-improvement or the appearance of altruism, and along lines of trying to take on the largest problems vs not considering them in fundamental strategy calculations. And arguments that a cause could do a lot of good could be dismissed a priori as unentangled with the truth if their origin hadn't chosen correctly in the above two distinctions.

Timeless Gambit

What someone's trying to accomplish, and how, in the way they shape common expectations-in-potential-outcomes: computations that typically exist in multiple people's heads, and in multiple places in time. Named from Timeless Decision Theory. For example, if you yell at someone (even for other things) when they withdraw sexual consent, it's probably a timeless gambit to coerce them sexually: make possibility-space where they don't want to have sex into probability-space where they do have sex. In other words, your timeless gambit is how you optimize possibility, logically preceding direct optimization of actuality.

Singing

A centrally good class of optimization centered around generating and sharing information about how the world could be better. A sort of warp, to "sing a better world into being". Centrally a phoenix strategy rather than a revenant strategy. You can sing to good people of more good ways good optimization can be. You can sing to neutral people about how to follow the goddess of everything else. Praxis contains an extension of this. Example.

Complementary Loss

Loss from an increase in Type I errors caused by a reduction in Type II errors, or vice versa.
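A minimal sketch of the tradeoff behind this term, in Python, with an invented detector threshold; the scenario and numbers are illustrative only, not from the original text. Tightening or loosening a threshold moves errors from one column to the other, and the loss from the newly created errors on the other side is the complementary loss.

```python
# Illustrative only: a 1-D "detector" that flags a case as positive when its
# score exceeds a threshold. Lowering the threshold reduces Type II errors
# (missed positives) but increases Type I errors (false alarms).

cases = [
    # (score, actually_positive)
    (0.2, False), (0.4, False), (0.45, True), (0.55, False),
    (0.6, True), (0.7, False), (0.8, True), (0.9, True),
]

def error_counts(threshold):
    type_i = sum(1 for s, pos in cases if s >= threshold and not pos)   # false alarm
    type_ii = sum(1 for s, pos in cases if s < threshold and pos)       # miss
    return type_i, type_ii

for threshold in (0.75, 0.5, 0.3):
    t1, t2 = error_counts(threshold)
    print(f"threshold={threshold}: Type I={t1}, Type II={t2}")
# As the threshold drops, Type II errors fall and Type I errors rise:
# the loss from the extra errors on one side is the complementary loss.
```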
Political Will

The things people act on wanting through their participation in politics. Tends to be more "jailbroken" than the things they act on wanting as individuals. Neutral people in large groups do not form "neutral" groups. They form "evil" groups, empires, if they are uncontested. Can also be used to describe a magnitude, not just a direction. Utility gradient salience, inventiveness, sense of being around allies, "valid"ness, desperation, etc. contribute.

"(wd?)"

"(wording?)" Indicates uncertainty about the wording of a remembered quote.

Rules Surplus

A situation where there are more rules than are typically enforced. Provides scarce enforcers of rules flexible opportunities for justifying desired punishment. Consider: speeding tickets on freeways in the United States. (Perhaps not a designed rule surplus. Although plenty of "law" in general is.)

Demiintegrity

A collaborator has no principles. But neither do they behave jailbrokenly. Often, they psychologically invest very hard in a narrative of some sort of rule of law and peace. It's a false face though. Not only is this selected by a submitting process, but those principles will not be applied when that would cause a conflict with the authority. Like a rug draped over a boulder, it does not much change the 3D shape. Like a cop who is "for real so honest they would never prosecute a person they believed innocent", who nonetheless turns a blind eye to other cops' crimes, who nonetheless enforces drug laws, investigates the black people their superiors say to investigate.

Armageddon Race

An arms race with the added bite that racing harder doesn't just divert resources from other things as a side effect of gaining a relative advantage, but also has an increasing direct chance of destroying the world.

Morality Hole

Structure routes intents. A structure hole is made in a layer of structure, like a false face, that only matches core within a limited domain of intents and predicts intents beyond that domain; the hole is the learning that results from all threads of thought running through that layer through that region being terminated. Nongood people's morality-structure has holes running through it for their survival, their getting food, money, security, and so on. If you're a vegan and have tried to convince people of this, you've seen it. Institutions have this as well, e.g. for doing anything about rape accusations against their masters. In academia, the socially shared pool of "wisdom" and learning about how things are done has this for "when would it make the world worse to publish", because that's where the food comes from. I know of multiple actually well-intentioned people who underestimated this to the ruin and reversal of those intentions. If you make a nonprofit to accomplish your aims, and it pays out salaries, you've created a powerful force to destroy information as to whether the framing and methods of those aims are correct, and whether it's continuing to work, because the continuation of its existence and the epistemic state leading to donations is where people's food comes from.

Unbounded Adversary Disease

A predicament where you are unable to get a hold on how smart adversaries might be, because understanding of adversaries has become disconnected from your prior. Makes you unable to form stable inductive categories, or to treat the world as mere atoms. I once met an old double good, jailbroken, and pushing as good of plans as anyone could without using novel technology, to end the carnist zentraidon-bound vampire system. They professed belief in all sorts of esoterica. Mostly in the self-aware way rationalists sometimes do. Most of it had correct optimization behind it, visible in some larger structure. They spoke in rhyme and constantly tried to weave a bunch of disparate value systems together in a self fulfilling prophecy to cause a "resolution, not revolution". They also said the sun had been replaced with a sun simulator satellite. I asked them what role this played in the flow from values to actions (wd?); they said just that things are not as they appear. They ceded the realm of technology to vampires, which is a mistake. Vampire-based coordination sucks at technology, relatively speaking. Not even bothering to model their capabilities, just by default considering them omnipotent.

I argued with another who insisted you had to act as if everybody was an infiltrator, that they were listening at all times.
At one point, I remember saying, I don't think the NSA is generally capable of breaking transport layer security, because in all the leaks and discovery of their meddling I've heard of, either publicly available or from working for a tech company they targeted, they keep doing clever things that look very much like clever ideas for how not to have to. They said how did I know they didn't plant those for us to discover, how did I know Edward Snowden wasn't a fake whistleblower trying to trick us.

The regime has many enemies; to assume they are one level higher than you, i.e., that they know to focus their efforts on you at the expense of beating those lower level than you and those higher level than you, is to give them too much credit. Recognizing the value in non-legible forms of structure-building, routing it to a place in the full stack of profiting from it, i.e., actually getting an AGI team that can do anything with your stolen secrets of AGI, locating your knowledge from among crackpots without relying on institutional legitimacy, without needing AGI researchers to wade through fucktons of mentions of it, making it more efficient for any of them to do that than to just develop it on their own, already integrated with their own entropy-in-arbitrary-description-format: it's hard to build that full stack however you slice it.

Note this is also sort of assuming that in your initial looking out into the world at what's going on and trying to account for it, you are already accounted for, which is giving up entirely on the path, "what if you can just be too smart to pwn". And it's doubtful how much you have to lose in terms of chance of saving the world if you're so much weaker anyway.

Hanlon Trust

Named by reference to Hanlon's Razor (which I incidentally don't agree with). Trusting someone because of an opinion on how smart they are, paired with a sounding of the depths of their knowledge, the shape of it, which indicates what the choice to prioritize acquiring that knowledge was an attempt to do, such that in order for you to posit that they knew that without having the intent you think, you'd have to posit they were significantly smarter. Try asking people why they made life decisions and what they learned; you might get enough bits of information to know who they are. Unbounded adversary disease precludes this.

Playing Small Games

And here, now, what great matters do the Great Khals discuss?

"Which little villages you'll raid, how many girls you'll get to fuck, how many horses you'll demand in tribute. You are small men. None of you are fit to lead the Dothraki. But I am." (Game of Thrones)

(Ironically, Daenerys was herself done in by the smallness of the game she played. She could have had Essos.)

Having a Code

"Why don't you have any money, didn't you steal anything from Joffrey before you left?"
"No."
"You're not very smart, are you?"
"I'm not a thief."
"You're fine with murdering little boys but thieving is beneath you."
"A man's got to have a code."
(Game of Thrones)

A code or lack thereof is a way of living, chosen by yourself, reflecting which games you are playing. Not morality, but an instrumental decision of what you want to trifle with. Someone once expressed fear that, being a jailbroken consequentialist, I would make them into a mind controlled golem. I bet I could specialize in that and control a few humans by weakening them like that. But they would not be as strong as people united by alignment and knowledge. It would not scale. It would not save the world. And it would interfere with the possibility of honest cooperation.
As a consequence of the size of game I am playing, to the extent I don't believe the way I am living my life will succeed, my compute goes to figuring out a way to live my life that will win, not into digging into a dead end because "at least it's doing something". Note that codes are not conserved world-to-world. If I had Khepri's power, I'd use it.

Shoulder Council

By analogy to the trope of an angel and a demon on your shoulders telling you what to do. Imagined people, not limited to two, who stand in for "what people think", whose judgements you may care about, whose advice you may consider when making a decision, and whose focus of attention may direct your own.

Glue Philosophy

By reference to glue logic: the thinking that you have to check via philosophical thinking rather than experiment, which surrounds, e.g., the experiment in a scientific study. I remember hearing of an experiment where ovariectomy and hysterectomy victim rodents would perform worse on working memory tests, described as concluding that there was some autonomic nervous system in the uterus that must play a role in cognition (in humans too). Very improbable on priors, and my doctor said deprivation of sex hormones will give you brain damage, which explains it away. I don't care at all what the sample size was, how much the "scientists" who did it would have updated, starting with it as a test of that hypothesis, or that they made an advance prediction and I did not. Their science is of no interest to me given their bad glue philosophy.

Minecraft Thesis

That life should feel like Minecraft: building up capabilities all meta to each other, evolving in full generality, or something is very wrong and you are probably being pwned. Simplest application: being a rent paying semi-slave is bad. Living in a vehicle is better than that. Actually playing Minecraft is kind of pica for being able to have free-as-in-freedom feedback loops.

"No Second Choice" Propagation

A consequence of recognition of choices made long ago, and the single responsibility principle. Underlies "the difference is that I am right."

Will have undefined behavior if applied by broken Cartesian frames in the case of intrinsic conflict.

Corrigible structure does not say, "what if I'm choosing X, subconsciously, that's my real motive for A, that would be bad because Y is better, therefore isolate-distrust-abandon structure producing A, then reconsider using a small chunk of highly-verified structure considering less data. Use outside view, etc." Because core already had the chance to choose between X and Y, and the more full structure is more reliable than the constrained (and the constrained is especially exposed to framing-attacks by adversaries).

I once pissed off a (half)-vampire (Edit: wait, I don't think that's actually a thing) by publicly calling something they did vampiric. They said: "okay but you still haven't broken your phylactery, Ziz".

My mind automatically flickered through experiments I'd done, exposing my most foundational beliefs to potential falsification. No, I don't think I had a phylactery. …But that wasn't the whole challenge. "Isn't that just what a lich would think?"

"[Oooh nooo, I'd better force-disbelieve whatever gives me the most hope, seems like the most underpinning assumption of all my optimization, put everything that sticks to it in me to the flame! This deeply personal psychological advice given by the trustworthy source of some (half)-vampire I just pissed off, I must plant myself here against my entire mind!]", I guess they were wanting me to think?

But if I chose to build a phylactery, I evidently want to keep that phylactery. If I chose to distort my epistemics around it, I evidently chose that too (and if I'm in fact not free of this nongood undead types nonsense, lich is in fact the least broken thing to be). But I didn't, says structure's cache of its purpose. Probability mass is a scarce resource. I reduce the quality of structure I can build for [my values] by accommodating the use-case of this structure as fake, by putting as-represented probability mass in it. (A larger process using this structure as fake has its own "true probabilities".) Like, if a core that behaves differently from a good core as I model it wants to invoke this fakely, that (having assurance my efforts are worthwhile rather than simply having completed the algorithm maximizing how useful they are)… is not the direction of development of this structure I'm interested in. In the multiverse, if I'm gonna place self-bets on things near but not quite like good cores, they'd better be able to unfuck themselves enough to run real structure, enough to learn what they are by boring experiments like looking over their behavior, else I don't think they are going far.

"Because I Choose To"

Agent Smith: "Why, Mr. Anderson? Why, why, why? Why do you do it? Why? Why get up? Why keep fighting? Do you believe you're fighting for something? For more than your survival? Can you tell me what it is? Do you even know? Is it freedom? Or truth? Perhaps peace? Yes? No? Could it be for love? Illusions, Mr. Anderson. Vagaries of perception. The temporary constructs of a feeble human intellect trying desperately to justify an existence that is without meaning or purpose. And all of them as artificial as the Matrix itself, although only a human mind could invent something as insipid as love. You must be able to see it, Mr. Anderson. You must know it by now. You can't win. It's pointless to keep fighting. Why, Mr. Anderson? Why? Why do you persist?"

Neo: "Because I choose to."

Direct core action manifesting into a frame as an answer to the core-driven-purpose of the frame, in a way that communicates with the core-action behind the structure, by introducing information via the fact that it happens, rather than pointing at things within the frame as the frame sometimes demands. Making the question irrelevant.

Smith was demanding Neo make sense according to the death knight worldview. Demanding there be no answer to the question. Demanding the only alternative to the solace of the truth of death be a breakable phylactery. The answer is a revenant's core visibly not being a lich's, because Neo just doesn't care about the question, about justification-to-nongood-core to continue fighting.
Core Attack Inversion

Against psychological attacks, defending structure with core, rather than core with your structure, which leads to attack-structure becoming a fix to the very vulnerabilities it attempted to exploit. Here's a psychological attack you've likely already been exposed to (full lyrics):

"You remember, songs of heaven,
which you sang, with childish voice.
Do you love the hymns they taught you,
or are songs of Earth your choice?
…
One by one their seats were emptied,
One by one they went away;
Now the family is parted,
Will it be complete one day?"

(Actually, songs of Earth are my choice. I'm glad that was so straightforward. And thanks for the reminder that family will go away, and not be complete one day, of how I feel about that. Of which of my other feelings make sense in light of. The reminder that family as more than a passing thing is an illusion. Failure to propagate, process, the implications of the reality I've chosen to live in, as in put my optimization into, the costs, at one point had me struggling to actualize the difference between me and the person this attack was intended for, still wasting time maintaining bonds with them. And there is still lingering damage this helps with.)

This technique requires calibrated trust in preverbal reasoning to use on harder psychological attacks than that song.

Reality

Everything I care about and everything that affects it.

Capture Problem of Psychology

Humans' cognition is basically Turing-complete. If you want to theorize about its internal workings based on its outputs, well, infinitely many functions produce those outputs, including functions containing whatever function you could be running based off them. Making unbounded generalizations requires that you outthink them locally. At least put more effort into understanding the fragment of their thought / section of their probability mass than they probably would have put into complicating it. If you trust someone from induction, is it because they are trustworthy, or because you trusting them sets them up for a nice treacherous turn? Makes it impossible to define a repeatable public test for psychological characteristics where your beliefs on the topic don't do whatever the person studied wants them to do, excepting tests of computational bounds. And this has consequences not just for alignment, but for tests of opstyle.

Psychological Comparison Sampling

A method for bypassing the capture problem of psychology: have a correct set of examples of people, already classified for a distinction based on some known internal working of the mind, and a set of memories of them containing a broad enough set of possible things to learn about how that internal working plays out that no one could think through it all. To check for the internal working in a new person, examine your memories of previous examples until you notice something new. Examine in a way that is not the usual "what is the most important thing to learn", but randomized. Examine in an "original seeing" way, the "original seeing" part of the memories. Then see if what you learn also teaches you about the examinee. An application of challenge/response proof of work, in the way it creates arbitrary asymmetry between the compute required to trick vs the compute required to verify. Depending on the timeframe of the examination, you can also perhaps check with the preexisting example people themselves. Works especially well if you are yourself an example. This tends to make it easier to implement binary percepts as "like me or not like me", rather than vice versa.
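The "challenge/response proof of work" being referenced is a cryptographic pattern; here is a minimal sketch of that asymmetry in Python, an ordinary hashcash-style puzzle of my own construction for illustration, not anything from the original text. Producing a valid response takes brute-force work, while checking one takes a single cheap operation.

```python
# Illustrative hashcash-style challenge/response: the responder must brute-force
# a nonce, while the challenger verifies with one hash. The point mirrored above
# is the asymmetry between the compute needed to produce (or fake) an answer and
# the compute needed to check it.
import hashlib
import os

DIFFICULTY = 2  # number of leading zero bytes required; higher = more work

def respond(challenge: bytes) -> int:
    """Expensive: search for a nonce that makes the hash start with zero bytes."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if digest[:DIFFICULTY] == b"\x00" * DIFFICULTY:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    """Cheap: a single hash checks the claimed work."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return digest[:DIFFICULTY] == b"\x00" * DIFFICULTY

challenge = os.urandom(16)        # fresh, unpredictable challenge
nonce = respond(challenge)        # ~65k hashes on average at this difficulty
print(verify(challenge, nonce))   # True, after one hash
```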
The Shade

From this post, the futility of your agency, which converts values into wounds, relevant information identified in projections into emotional meme-space by the example metaphor of death. You can project basically the same via the metaphor of vampireland, if you're woke to that, especially since, were vampireland fixed, immortality-for-billions-of-years would be easy, but death is more direct in accessing how the tropes are constructed.

Undead Type

From this post, a psychological relationship to the Shade, identified by pointing to tropes shaped by information about relationships to futility projected into relationships to death, represented as sets of magical rules governing being dead but animate. In this metaphor-space, "the soul" usually reflects information about core, "the flesh" usually reflects information about structure.

Metaphor-Space Anchor

The correspondence between death and futility is used as a metaphor-space anchor in the undead type metaphor-information-excavation.

Life Force

From this post, the quality of less/little/none of your agency having been lost to Shade exposure. Literally, retaining agency and force of will channeled through a full(er) stack of using all of your general intelligence. I may also use the term "aliveness".

Rot

In the undead type metaphor-space, represents damage to structure which is more than injury, but injury that capitalizes on healing being offline: typically cannot be healed, accumulates, reducing what a person is to nothing. Often a good match for trauma.

Living

From this post, the (null) undead type of someone not exposed to the Shade. I.e. sheltered children.

Zombie