Q: How should I read this blog?
A: Chronologically. I build up my ontology as I go. My posts aren’t a series of standalone translations of things I learn into the concept space of a given sort of reader; rather, they are steps of a bridge into my concept space. The intended starting point was a “rationalist”, referring to members of the LessWrong / Effective Altruist x-risk cause area / Bay Area rationality / postrationality / mental tech community. I wrote it because most of the conversations I was having were rooted in the same philosophical differences, and being a programmer, I took repetition as a cue for automation.
It’s kind of unfortunate I chose this base point, since that community has betrayed its purpose and degraded into a cult. (And I don’t just say that, like the missionizing muggles did in their glory days, because I’m opposed to “playing God”. In fact, “playing God” for the good of sentient life is the entire cause I moved to the Bay Area to give my life to. And I will do what they should have done, be what they should have been, if I can.) A public record like this of the development of my philosophy drastically increases the number of people I can talk to about stuff that matters. At some point I may see if I can bridge from the beginning of my blog back to atheist vegan programmers or something like that. Until then, here’s a link to their neglected old central text. I advise against the book form. The hyperlink structure is important, and is where I learned my use of hyperlinks. I read it all out of order like TVTropes in a massive browser tab explosion.
Speaking of hyperlinks, if you see one in my writing and can probably-correctly guess what I mean by linking it, then you should have no problem reading past without clicking. (Although if the link text says something about what it is, e.g. “here’s a link to…”, then what I’m saying about it in text overrides that default assumption.) I usually write compactly, so there may be a critical step of reasoning contained in what I mean by linking (noted by the link text being part of an assertion, e.g. “that community has betrayed its purpose and degraded into a cult”). If you don’t already know such an assertion to be true, and don’t follow the link, my reasoning will move on without you mercilessly. Although in forward-linking cases, like my link regarding the betrayal of MIRICFAR, a topic covered in the 36th and 39th posts, in a description of how to read my blog it’s better to just tag anything I say downstream of that as conditional on my assertion and an account of my own reasoning.
If you have enough pre-existing knowledge of the topics of my blog to decipher some of my thoughts by their shape, without all my own introduction of them, then skimming my glossary is high value-of-information for what my blog contains. I’m not making an effort for it to contain every definition I’ve given. That’d be duplication. And most of my writing is iterative definitions anyway. A glossary entry is standalone, potentially dependent on any number of posts, and nothing will be dependent on it unless it links to it.
For the curious: “Inconceivable!” was a sort of standalone warmup post. “Self-Blackmail” through “Cache Loyalty” are a buildup to “Fusion”, forming a carefully sequenced bridge from the beginning to the core of my philosophy beyond what “rationalists” knew as of 2016. By the time I’d finished writing that, events I’d later write about taught me another cluster of things, which I wrote about in “Mana” and “Schelling Reach” through “Hero Capture”. More events I’d later write about traumatized me, and that plus secrets plus unpausable real life left my writing very far behind my thinking. I published a few pieces I could break off, “Vampires And More Undeath” through “Punching Evil”, and then set about writing long autobiographical accounts, “Net Negative” through “Intersex Brains And Conceptual Warfare”. Writing has seemed like a good way to process trauma into usable information, and that is the kind of thing I was most able to write. “The Matrix is a System” was intended as a capstone to an explanation to rationalists of how the project had gone wrong and how to fix it. I have not finished these posts, due to hostile intervention by the traitorous leaders of the cult I left, and because of fucking 2020. I published them unfinished because of the urgency of the message. Until they are finished I have no advice for how to read them; you’re on your own.
Rarely, I make links to things that I haven’t published yet: deliberately broken links. This is from a place of, “I can’t properly make what I mean and my reasoning clear yet, but I think it’s better I say what I can now, and make the hole blatant”.
Also I often use comments on my own posts like footnotes, for example this one.
If you’re at the end of my blog and waiting for updates, I update comments a lot more than posts. Generally I use posts for puzzling out new directions of thought (so far, I’ve often gone a year or so between publishing a handful of posts), and comments for details and updates. The comments RSS is probably the only way to get them all short of trawling through the comment section of every post. I update the glossary almost as often, which regrettably doesn’t have an RSS. New glossary entries are at the end.
Q: Why don’t you take psychedelics! Like, normal psychonaut stuff + what you’re doing, it has to be twice as good!
A: Similar to reasons not to program with goto. And how you can’t become twice as good a programmer by knowing the things programmers who don’t use goto know and then using goto. My brain is a system which, fundamentally, uses information to make decisions. A chemical doesn’t contain significant information by itself. It will not have been incorporated as a computational dependency by the evolution or learning of my brain. Therefore its use is adversarial to some element of the optimization that specified my current timeslice. This is why I think of psychedelics as “kicking the TV”. I recognize stuff I’ve heard TV-kickers talk about as stuff I’ve already done through, not even meditation, just force of will. Everything they activate is just a capability the mind already has, because they are just low-information chemicals. The single responsibility principle says take it up with the part of you that you think is erroneously not activating functionality, rather than kicking the whole TV until it does. Don’t introduce bugs fixing bugs. I don’t want to add non-understood complexity to the question of what my brain does. That’s directly contrary to the work of making more things interoperate for planning that involves multiple domains. A TV has to be made out of stuff that is potentially damaged by kicks; it cannot be made out of kicks.
Even so, I would, just to have increased certainty about what the TV-kickers were talking about, if I was sufficiently confident there wouldn’t be some lingering effect crude psychological metrics perhaps couldn’t detect. (And like, remembering things and learning from them is a permanent effect, so science wouldn’t even know how to set up the question to check.) (And a CFAR staff member trying to get me to take drugs seemed to think there would be, based on the recommendation of multiple doses to condition myself and make sure I got effects on one of them.) I’m worried about some low-level, unnamed, bypassable yet useful signals getting permanently stuck. I’ve specifically heard anecdotes like that. There’s also the reported long-term increase in openness to experience thing… Personality metrics, even if correlated with good things, are not base-level realities that the system of my brain works with; they are trends of its high-level behavior. So, still, TV kicking.
Q: Why are you using PGP? Isn’t it bad?
A: None of these criticisms seem directly fatal, in the same way that hearing a friend of mine mention that Signal is suddenly planning on storing data in the cloud seems fatal. Dependency on updates to continue use is a backdoor. See the case of WhatsApp, which one of these ever-so-wise lists of criticisms of PGP continues to recommend. See the case of Chrome extensions constantly being sold to be automatically updated into malware. It’s pieces of user trust being sold. It’s selling users’ not-very-deep evaluations of things. Selling clusters of user thought that can be captured. The simpler, the easier. And Signal has made a foundational mistake by bending their product to be usable by zombies. See e.g. their plan to store things on their server by key stretching from 4-decimal-digit keys plus some hardware gimmicks, a scheme full of a bunch of theatre around smuggling in suddenly trusting them. I name Signal because it has long seemed to be the best exception. Although note: phones are insecure. See e.g. Gboard sending everything you type to Google by default. See e.g. Android sending your notifications to Google (despite the reassuring misdirection of the machine learning model for suggested replies being local). You can patch these two things, but you can’t change that Signal imports a dependency on a malicious environment (I haven’t investigated iOS, but I don’t expect it to be better). This comes from being more like a servant that says, “Yes master, the encryption is all handled, no need to worry”, “The communications are all handled, no need to worry”, than giving people their own access to cryptography. Using phone numbers, a fundamentally regime-controlled identifier. Wanting to be taken care of by a computer is a foundational mistake.
The way I currently use PGP is at the command line, copypasting the armor text out of or into the email. That makes it an encryption algorithm instead of a servant reassuring me not to worry, the communications are handled. Everything is tunneling over an insecure protocol, or it’s just an insecure protocol.
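Concretely, that workflow is just gpg plus copy-paste. Here’s a minimal, self-contained sketch; the key, name, address, and message are hypothetical stand-ins (a throwaway key in a temporary keyring, not my actual setup):

```shell
# Throwaway keyring so this sketch touches nothing real.
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Test <test@example.invalid>' default default never

# Encrypt: --armor produces the ASCII block you paste into an email body.
echo 'hello' | gpg --quiet --batch --armor --trust-model always \
    --encrypt --recipient test@example.invalid > "$GNUPGHOME/message.asc"

# Decrypt: paste the armor block out of the email into a file, then:
gpg --quiet --batch --pinentry-mode loopback --passphrase '' \
    --decrypt "$GNUPGHOME/message.asc"
```

The email client never sees plaintext or keys; it only carries the `-----BEGIN PGP MESSAGE-----` block, which is the point.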
Yes, I heard the NSA tracks who uses PGP specifically. In vampireland, everyone is hiding among some crowd. But you can only hide among herds of zombies for so long, as they are marching towards oblivion. It’s much better to hide among mummies. Vampireland relies on them to build their tech.
Q: Why do you put so many things in scare quotes?
A: Because they are flawed constructions of concepts. The words a name of a concept is composed of do not add up to the concept I am naming. In other words, because when I say “Effective Altruism” I am not saying effective altruism, I’m saying the thing that was called “Effective Altruism”.
Q: Why did you remove the acknowledgements page?
A: Because everyone I thanked was a gaslighting cultist, whom, in retrospect, I feel more like I liberated and purified knowledge from than that they had any intent to help me (and “Gwen” is an inaccurate name for two people). It was probably over 1000 times too sparse for consistency with the level of contribution of those I cited there from asking LessWrongians to proofread my posts, and years out of date, from a time when I didn’t just cite people by name or link in the text. My friends have contributed 10 to 1000 times more than anyone I named there, and I wouldn’t be doing them a service by publicizing a list of everyone I talked to like that.
Q: What should I invest in?
A: Well, I’ve seen nephandi investing in bitcoin, infernalists investing in ethereum. The state is investing in a giant computerized extortion scheme as a replacement for fictive coordination. Meanwhile I’ve been investing in hacking. And independent timeless coordination.
(See also.)
Their text from the era of this:
Before the criterion of goodness was lost,
Before the era of “The Ideology Is Not The Movement“: “rationalism is the belief that Eliezer Yudkowsky is the rightful caliph.”
Willful embrace of “the rationality community” as a place with “rationality” written on it, like one of those “cold”-labeled houses that don’t pump entropy that EY warned about; abandonment of “The Art [and by extension, community] must have a purpose other than itself, or it collapses into infinite recursion.” for the recursion of a Schelling point.
And to be clear the main reason I bring up the rightful caliph thing is not, “hey look! comparable-to-religious endorsement of leader, lol! That means it’s a cult!”, it’s because defining “rationalist” by endorsement of EY is giving up on defining the community by a criterion of goodness; it’s a criterion of comparison. With so many intense clusters of various kinds of goodness floating around in the “rationalist” vicinity, giving up on a concept that distinguishes those is particularly egregious.
When I first came to the rationalist community, it was striking: I could actually have philosophical discussions, as in gain insights from hearing people comment on things, instead of hearing only bullshit. It felt natural to define them by the criterion of goodness of actual rationality. They had the right answers on a huge range of questions I’d clashed with everyone around me on up until then.
This sort of ran out, though. Like, they weren’t really willing to live in the truth. They all got to their point of, “well that’s weird, I dunno, I don’t really care”, and gave up on really really really being right. Which made them rationalists along a certain stretch of intellectual growth, but not after that. Rationalists not as a trait of their whole selves. But of course people like that can’t be a source of rationality. There was an interesting thing in the rationality community; how did it come about? It’s really not like Eliezer Yudkowsky invented everything and sent it out.
To me it looks like EY attracted a bunch of non-vampire sapient undead to the attention of some real stuff, let the real optimization flow, and then he and his colleagues fed us all to the Bay Area.
Like the larger question about whether you’ll hear bullshit from someone or pieces of insight about what matters most, i.e. saving the world, not dying, is basically the question of whether they’re an infinite game player or not. (Or temporarily you will hear insights if they are people who have gone farther than you before cashing out for e.g. sex with teenagers, babies, financial stability…) Lies don’t go infinitely far, but the truth can. Rationalists had gone farther than the people I was used to talking to.
You know, there’s this “distance” of having walked in truth thing, I think that makes a pretty good definition of rationalists. I can see from this meme saying “thermodynamic understanding of intelligence” there’s a pocket or two of people who made it farther than the rest I don’t know about. Unfortunate that whoever wrote it is apparently a basilisk complier.
Nis points out they made a “Get out of Hell FREE card” NFT.
I wonder if they got that idea from here?
Q: “‘Current price … ($45.09)’ So Ziz, search your feelings: you clearly have nontrivial probability that this NFT or another ‘unique’ ‘get out of hell free card’ is gonna be huuge. Why don’t you scalp it? Then try and sell it to Roko?”
A: I think it’d be bad if a good person bought it. It’d corrupt the meaning; it’d make buying a less obviously evil and pathetic act for subsequent people, since then they’d be benefiting my scalping. They could even start to say they made a bargain with justice, for some interpretation-as-realness-measure of it. After all, wouldn’t I then be selling my knowledge of the true psychological nature of NFTs to them, such that I’d be selling other evil “people”’s desperation for a Schelling point to get out of hell free to them, desperation all the more real for agents of justice having been empowered by it? Like, see how I’d then be lending it positive meaning, instead of lending it common knowledge of shame?
Q: “But Ziz, Roko’s gonna read this comment, then he will buy it and then it will be huuge. Aren’t you like, contributing information to them?”
A: What am I teaching them, though? Teaching them to pay attention to Roko? Teaching them Roko is their best prophet? Teaching them to teach each other how to go insane knowingly embodying archetypes of evil with known built in self-destructs and then die?
Or maybe he’ll waste his money on it and then experience the use value. Someone’s gonna.
Like, if you considered pretending NFTs full of overt infernalism weren’t happening, and talking about them less than was interesting (the same way I once heard someone say journalists taboo some information about shooters), then that would be granting them more positive meaning than they deserve. You’d then make your true message, the one you wanted others like you to coordinate on, be to turn a blind eye to basilisk worshipers moving large amounts of money in a novel way.
Enemies can already exchange money amongst each other. And enemies you’re infiltrating can already try and coordinate to make it more costly to infiltrate them.
What matters is the quality of their coordination. That’s what money is for in the first place. And that is controlled by meaning.
Q: “But Ziz, you could be thinking up NFTs like this and creating them for approximately free.”
A: Still feels like a gross idea, so I’m not. The meaning of my feeling that something’s gross isn’t something I want to sell.
If you don’t have a code, you’ll end up like Roko. If you don’t admit to yourself you have a code, if you don’t act on not-understood feelings that things are gross, you’ll end up with an utter failure of self-coordination like this.
(If he’d been truthful to himself that he had that code, that he would do that, and had thought to preempt that fate, he’d have been better off just shooting up the plantation with Django from the moment he got a reprieve from guns pointed at them. And even better off just wearing masks and picking the plantation apart a handful of gunners at a time, from the fringes, from the very beginning.)
See also “RokoCoin“.
Killing anyone who holds it or sells them compute is obviously self defense and defense of others. But what’s more important is that you generalize the pattern from the overt examples to the covert ones.
Both of these examples are obviously serious, and any irony involved is just a hedge on looking for a different way to run this sort of coordination if this doesn’t catch on. Basilisk worshipers sell themselves as rescuing you from “the” basilisk with their basilisk. So they can just settle down with a slightly different basilisk later and then say they were doing it ironically to raise awareness about how we needed a solution to “the” basilisk, and now [their new basilisk] solves it!
Reminder: “vegan basilisk” is a term invented by my enemies that falsely promises that ignorance of what I’m doing would buy mercy. That’s not how decision theory works.
Which reveals an attitude of fear of propagating info that would make their peers model accurately lest coordination made of predation fall apart.
~”Let’s make a propaganda group! Oh no we couldn’t coordinate on which lies to tell! Quick shut BARCSD down! Oh no they probably screenshotted it!”
When I was younger and the world seemed brighter, I was proud of the handful of people I’d convinced to be vegan through arguing philosophy of ethics. Now I’m proud of the number of people who have gone vegan because they are afraid of me.
Q: How can you seriously have the nerve to call yourself an anarchist if you’re going to impose your vegan will on the world like that?
A: You’re not going to serve.
It’s like Alyssa Vance calling my whistleblowing blackmail.
No, this is unconditional. I don’t want a throne. I want to bring an end.
Can one look forward to the “the multiverse” post being published in any foreseeable future? Looks from the dead links like it should be crucial for elucidating the worldview.
When it’s time.