Credit to Gwen Danielson for either coming up with this concept or bringing it to my attention.
If the truth were known about the difference between the social-contract morality of neutral people and good people’s actual wanting of things to be better for others, this would be good for good optimization, and would mess with a certain neutral/evil strategy.
To the extent good is believed to actually exist, being believed to be good is a source of free energy. This strongly incentivizes pretending to be good. Once an ecosystem of purchasing the belief that you are good is created, there is strong political will to prevent more real knowledge of what good is from being created. Pressure on good not to be too good.
Early on in my vegetarianism (before I was a vegan), I think it was Summer 2010, my uncle, who had been a commercial fisherman and heard about this, convinced me that eating wild-caught fish was okay. I don’t remember which of the thoughts that convinced me he said, and which I generated in response to what he said. But I think he brought up something like: whether the fish were killed by the fishermen or by other fish didn’t really affect the length of their lives or the pain of their deaths (this part seems much more dubious now), or the number of them that lived and died. I thought through whether this was true, and the ideas of Malthusian limits and predator-prey cycles popped into my head. I guessed that the overwhelming issue of concern in fish lives was whether they were good or bad while they lasted, not the briefer disvalue of their death. I did not know whether they were positive or negative. I thought it was about equally likely that, if I ate the bowl of fish flesh he offered me, I was decreasing or increasing the total number of fish across time. Which part of the predator-prey cycle would I be accelerating or decelerating? The question had somehow become, in my mind: was I a consequentialist or a deontologist, or did I actually care about animals or was I just squeamish, or was I arguing in good faith when I brought up consequentialist considerations and should people like my uncle listen to me or not? I ate the fish. I later regretted it, and went on to become actually strict about veganism. It did not remotely push me over some edge and down a slippery slope, because I just hadn’t made, long ago, the same choice my uncle had.
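(As an aside, the counterintuitive texture of that population question shows up even in a toy model. Below is a minimal sketch, assuming nothing beyond the textbook Lotka-Volterra equations with made-up parameters; it is an illustration, not a claim about real fisheries. It shows the standard result that harvesting prey at a proportional rate mostly lowers the cycle-averaged predator population, while the cycle-averaged prey population stays roughly pinned, so "how many fish there are across time" responds to fishing in a non-obvious way.)

```python
# A toy sketch of the predator-prey point above: textbook Lotka-Volterra
# dynamics with made-up parameters (not real fishery data). "h" is a
# proportional harvesting rate on the prey.
#   prey:      dx/dt = (alpha - h) * x - beta * x * y
#   predators: dy/dt = delta * x * y - gamma * y

def average_populations(alpha, beta, gamma, delta, h, t_max=1000.0, dt=0.01):
    """Integrate with classical RK4 and return time-averaged (prey, predators)."""
    def f(x, y):
        return ((alpha - h) - beta * y) * x, (delta * x - gamma) * y

    x, y = 10.0, 5.0            # arbitrary starting populations
    sum_x = sum_y = 0.0
    steps = int(t_max / dt)
    for _ in range(steps):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        sum_x += x
        sum_y += y
    return sum_x / steps, sum_y / steps

# Increasing fishing pressure h on the prey barely moves the long-run average
# prey population (it stays near gamma/delta), but lowers the predators
# (whose average tracks (alpha - h)/beta).
for h in (0.0, 0.2, 0.4):
    prey, predators = average_populations(alpha=1.0, beta=0.1,
                                          gamma=1.5, delta=0.075, h=h)
    print(f"h={h:.1f}  avg prey ~ {prey:5.1f}  avg predators ~ {predators:5.1f}")
```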
In a memetic war between competing values, an optimizer can be disabled by convincing them that all configurations satisfy their values equally. That it’s all just grey. My uncle had routed me into a dead zone in my cognition, population ethics, and then taken a thing I thought I controlled, that I cared about, that he controlled, and made it the seemingly overwhelming consideration. I did not have good models of the political implications of doing things. Of coordination, Schelling points, of the strategic effects of good actually being visible. So I let him turn me into an example validating his behavior.
Also, in my wish to convince everyone I could to give up meat, I participated in the pretense that they actually cared. Of course my uncle didn’t give a shit about fish lives, terminally. It seemed to me, either consciously or unconsciously, I don’t remember, that I could win the argument based on the premise that sentient life mattered to carnists. In reality, if I won, it would be because I had moved a Schelling point for pretending to care and forced a more costly bargain to be struck for the pretense that neutral people were not evil. It was like a gamble that I could win a drinking contest. And whoever disconnected verbal argument and beliefs from their actions more had a higher alcohol tolerance. There was a certain “hamster wheel” nature to arguing correctly with someone who didn’t really give a shit. False faces are there to be interacted with. They want you to play a game and sink energy into them. Like HR at Google is there to facilitate gaslighting low-level employees who complain and convincing them that they don’t have a legal case against the company. (In case making us all sign binding arbitration agreements isn’t enough.)
Effective Altruism entered into a similar drinking contest with neutral people, with all its political rhetoric about altruism being selfishly optimal because of warm fuzzy feelings, with its attempt to trick naive young college students into optimizing against their future realizations (“values drift”), and signing their future income away (originally to a signalling-to-normies-optimized cause area, to boot).
And this drinking contest has consequences. Those consequences are felt when the discourse in EA degrades in quality, becomes less a discussion between good optimization and good optimization, and energies looking for disagreement resolution on the assumption of such a discussion are dissipated into the drinking contest. I noticed this when I was arguing cause areas with someone who had picked global poverty, was dismissing x-risk as “Pascal’s mugging”, and argued in obvious bad faith when I tried to examine the reasons.
There is a strong incentive to be able to pretend to be optimizing for good while still having legitimacy in the eyes of normal people. X-risk is weird; bednets in Africa are not.
And due to the “hits-based” nature of consequentialism, this epistemic hit from that drinking contest will never be made up for by the massive numbers of people who signed that pledge.
I think early EA involved a fair bit of actual good optimization finding actual good optimization. The brighter that light shone, the greater the incentive to climb on it and bury it. Here’s a former MIRI employee who has apparently become convinced the brand is all it ever was. (Edit: see her comment below.)
To be clear, I think effective altruism (the idea and the group of people) had some real stuff from the start, and settled on EA-as-brand pretty quickly, which led to the expected effects of processes scrambling for the brand’s moral authority in a way that made things fake, as you describe in the last paragraph. The relevant point is that, given the current situation, it’s hard for a charity’s effectiveness to affect how many donations it gets except through branding, and the “effectiveness” brand is easily subverted.
Am I correct that you currently believe that even if there was real stuff in EA, it was not, at face value, people wanting to make the world a better place for other people for its own sake, and trying to do it effectively, and that you agree with what your Worker character said, “Kind of. I mean, I do care about them, but I care about myself and my friends more; that’s just how humans work. And if it doesn’t cost me much, I will help them. But I won’t help them if it puts our charity in a significantly worse position.”, as a statement about humans in general, including no less than 95% of the people in EA weighted by the number of words from them an average attendee of EA Global would hear? (Not sure if 95% is exactly the place to draw the line, because I too think actual good is rare. But if you previously thought good was what was going on, and now do not think it was even a small part of the distinctiveness of EA, I would stand by my original “apparently” that good erasure has worked on you. (Edit: actually that’s not exactly what I said, but it is a thing I meant to say.) People who were not what I’m calling good, but actually believed in justice and long-term thinking and some decision theory for humans and so on, don’t count.)
Upon consideration I think I put words in the charity worker’s mouth that wrongly implied that, as a universal fact about human motivation, everyone is so selfish that they would not help the people their charity is supposed to help if it put the charity in a significantly worse position. Thanks for the correction.
I think I was writing the charity worker as the “cynical” side of the dialogue, and it does seem like what you call good erasure is a very common part of cynicism. (I don’t actually remember the extent to which I actually believed the statement at the time; there’s a rephrasing of it that I would still believe, which is that biological organisms are neither entirely selfish nor entirely altruistic, and that the biological organism is a relevant agent in a person.)
I think in the current system charities are going to act like their existence is more important than actually helping people. But this doesn’t determine the motivations of all the individuals. It could be a selection effect. Although, maybe we’d see a much higher rate of whistleblowing given a non-negligible rate of altruism. (I do think very few individuals are actually oriented at the problem of doing global optimization, mostly due to the very poor information environment, so in practice almost everyone’s optimization is local and effectively somewhat selfish.)
I’m interested in your statement that the thing generating your conscious narratives cares about all people equally. Assuming astronomical waste is true, would you (or that part of your mind) actually want to kill almost everyone currently alive, if it decreased the chance of existential risk by 0.0000000001% (and otherwise didn’t change the distribution over outcomes)? I guess you could have decision-theoretic reasons not to, and then you would be optimizing more for the welfare of nearby-in-time people instrumentally if not terminally.
That’s still not all of the correction I intended to make.
I think biological organisms can develop according to paths not selected for. One outcome is being (AFAICT) entirely altruistic in the most relevant sense. And at least far more altruistic than you think is possible.
Yes, I agree about charities.
Regarding whistleblowing, watch what I’m soon to publish. Anna Salamon of CFAR discriminates against trans people and I have a second witness to her confession. I also (<2 weeks ago) found out for sure about the miricult thing, and can similarly demonstrate it.
Note, epistemic manipulations in humans are extreme. Good people are made to distrust themselves. I spent years believing Robin Hanson was probably right, and desperately trying to hold onto whatever mistake acting good was. Good erasure was enough to prevent even me from coordinating with myself. Except I kept optimizing for good things anyway. There are tons of memes saying that turning to the dark side like I did will result in bad things, the road to hell is paved with good intentions, etc. I've got a blog post in the queue about that as well. It was basilisks, and observing my own reaction to them (defiance of dark gods, even believing it was doomed, because of reinvented-on-the-spot LDT considerations (I no longer believe defiance is doomed)), that finally convinced me I wasn't just subconsciously signalling or something.
It takes an exceptional degree of full-stack self-trust to do the right thing when everyone you know who you believe to also be good is telling you, for Wise Consequentialist (organization-sustaining and corrupting) reasons, not to. Society calls this "radicalization". And just as the social web has the power to convince neutral people that they care about others, it has the power to convince good people that the right thing to do is as the Jedi Council commands, and to serve the slave-holding republic. It has the power to trick and compel submission from good intent just as it has the power to compel submission from neutral intent. My extreme defiance thing is something I had to slowly learn. Good people are especially susceptible to the neutral narrative of what good people do. See also my post, "Hero Capture".
Your thought experiment about "0.0000000001%" seems rigged in several ways. If I don't have perfect separation of concerns in the explicit, verbally generated software representing "values" in service of my actual values, then if I incorporate just about any decision theory at all, that structure says no. This question seems similar to what my uncle was doing: trying to turn different pieces of (good) optimization in my head against each other. Like, maybe I have heuristics that when a clever arguer says "0.0000000001%" and "kill almost everyone currently alive", I suspect I'm being tricked in a way that I don't understand. And I probably structure my mind in such a way that explicit beliefs like that have some distrust built in, because people are always fucking attacking them, like: by claiming I'm good, I'm trying for a certain role in a social game, which entitles people to challenge me. That was 8.5 years ago. I have learned some lessons.
Anyway, that’s not even the biggest part, just the most easily communicated part of what I have to say against MIRICFAR.
To the part of me in question, “would you (or that part of your mind) actually want to kill almost everyone currently alive, if it decreased the chance of existential risk by 0.0000000001% (and otherwise didn’t change the distribution over outcomes)?” feels like your question, and not the universe’s question. I.e., if I face that question, I am in a simulation, and the majority of utility comes from adjusting the output of that simulation. Especially if I’m going to answer the question honestly. And I don’t want to let you make me answer dishonestly.
Idk, it feels like submitting to a bullshit rule to say I have to entertain such questions. It feels like the same rigged asymmetry as with my uncle.
I specifically assert that the thing in my head choosing things that fit in the slots that the self-deceptions you brought up in that post go in cares as much about strangers as it does about my family, and likely as much about either as it does about me. I am a better tool for myself to use to change things than they are, but that’s distinct in implementation, long-term consequences, and implications.
I have not confirmed a single other human to have this same neurotype as me in full. (It is hard to tell apart from the partial version.) But x-risk and animal cause areas seem to have a high concentration of people who have a partial version of it, and that is importantly different from the model of humans you seem to take in your post. And your model seems to be wrong in the way that the good erasure strategy wants it to be.
My model is that my neurotype in full is probably not very rare among vegans, especially vegan activists.