Hero Capture

Epistemic status: corrections in comments.

Neutral people sometimes take the job of hero.

It is a job, because it is a role taken on for payment.

Everyone’s mind is structured throughout runtime according to an adequacy frontier in achievement of values / control of mind. This makes relative distributions of control in their mind efficient relative to the epistemics of the cognitive processes that control them. Seeing which thing a conservation law is obeyed for in marginal changes to control is seeing someone’s true values. My guesses as to the most common true biggest values are probably “continue life” and “be loved/be worthy of love”. (Edit: currently I think this is wrong, see comment.) Good is also around. It’s a bit more rare.

Neutral people can feel compassion. That subagent has a limited pool of internal credit though; more seeming usefulness to selfish ends must flow out than visibly necessary effort goes in, or it will be reinforced away.

The social hero employment contract is this:

The hero is the Schelling person to engage in danger on behalf of the tribe. The hero is the Schelling person to lead.
The hero is considered highly desirable.

For men this can be a successful evolutionary strategy.

For a good-aligned trans woman who is dysphoric and preoccupied with world-optimization to the point of practical asexuality, when the set of sentient beings is bigger than the tribe, it’s not very useful. (Leadership is overrated too.)

Alive good people who act like heroes are superstimulus to hero-worship instincts.

Within the collection of adequacy frontiers making up a society created by competing selfish values, a good person is a source of free energy.

When there is a source of free energy, someone will build a fence around it, and they are incentivized to spend as much energy fighting for it as they will get out of it. In the case of captured good people, this can be quite a lot.

The most effective good person capture is done in a way that harnesses, rather than contains, the strongest forces in their mind.

This is not that difficult. Good people want to make things better for people. You just have to get them focused on you. So it’s a matter of sticking them with tunnel-vision. Disabling their ability to take a step back and think about the larger picture.

I once spent probably more than 1 week total, probably less than 3, trying to rescue someone from a set of memes about transness that seemed both false and to be ruining their life. I didn’t previously know them. I didn’t like them. They took out their pain on me. And yet, I was the perfect person to help them! I was trans! I had uncommonly good epistemology in the face of politics! I had a comparative advantage in suffering, and I explicitly used that as a heuristic. (I still do to an extent. It’s not wrong.) I could see them suffering, and I rationalized up some reasons that helping this one person right in front of me was a <mumble> use of my time. Something something, community members should help each other, I can’t be a fully brutal consequentialist, I’m still a human, something something good way to make long term allies, something something educational…

My co-founder in Rationalist Fleet attracted a couple of be-loved-values people, who managed to convince her that their mental problems were worth fixing, and they each began to devour as much of her time as they could get. To have a mother-hero-therapist-hopefully-lover. To have her forever.

Fake belief in the cause is a common tool here. Exaggerated enthusiasm. Insertion of high praise for the target into an ontology that slightly rounds them to someone who has responsibilities. Someone who wants to save the world must not take this as a credible promise that such a person will do real work.

That leads to desire routing through “be seen as helpful”, sort of “be helpful”, sort of sort of “try and do the thing”. It cannot do steering computation.

“Hero” is itself such a rigged concept. A hero is an exemplar of a culture. They do what is right according to a social reality.

To be a mind undivided by akrasia-protecting-selfishness-from-light-side-memes is by default to be pwned by light side memes.

Superman is an example of this. He fights crime instead of wars because that makes him safe from the perspective of the reader. There are no tricky judgements for him to make, where the social reality could waver from one reader to the next, from one time to the next. Someone who just did what was actually right would not be so universally popular among normal people. Those tails come apart.

Check out the etymology of “Honorable”. It’s an “achievement” unlocked by whim of social reality. And revoked when that incentive makes sense.

The end state of all this is to be leading an effective altruism organization you created, surrounded by such dedicated people, who work so hard to implement your vision so faithfully, and who look to you eagerly for where you will go next, yet you know on some level that the whole thing seems to be kept in motion by you. If you left, it would probably fall apart, or slowly wind down and settle into a husk of its former self. You can’t let them down. They want to be given a way for their lives to be meaningful and to be deservedly loved in return. And it’s kind of a miracle you got this far. You’re not that special, survivorship bias etc. You had a bold idea at the beginning, and it hasn’t been totally falsified. You can still rescue it. And you are definitely contributing to good outcomes in the world. Most people don’t do this well. You owe it to them to fulfill the meaning that you gave their lives…

And so you have made your last hard pivot, and decay from agent into maintainer of a game that is a garden. You will make everyone around you grow into the best person they can be (they’re kind of stuck, but look how much they’ve progressed!). You will have an abundance of levers to push on to receive a real reward in terms of making people’s lives better and keeping the organization moving forward and generating meaning, which will leave you just enough time to tend to the emotions of your flock.

The world will still burn.

Stepping out of the game you’ve created has been optimized to be unthinkable. Like walking away from your own child. Or like walking away from your religion, except that your god is still real. But heaven minus hell is smaller than some vast differences beyond, that you cannot fix with a horde of children hanging onto you who need you to think they are helping and need your mission to be something they can understand.

12 thoughts on “Hero Capture”

  1. It’s so easy to fall into the trap of the hero/lover/beloved role. I know that because I’ve personally succumbed to it myself. Initial feelings of loneliness fed into a desire for connection, which fed into obsession and more loneliness. I was able to break it only because I actually developed relations and discovered the seamy underside: that all relations diverge, that competition for power is the name of the game, not love, and that in the end, the people you like may not be as well-suited for you as you’d like.

  2. I need to concentrate and examine my own experience to get most of the concepts/distinctions you make in your blog, and there’s still a ton for me to learn here. I really value and appreciate you writing it. Thank you!

  3. It has been half a year now since your last post; is everything okay? Big project going on, or has it just not felt rewarding enough?

    I for one miss your insightful writing, hope you’ll come back and share more of your thoughts.

  4. You’ve portrayed heroes as being heroes relative to societal terms – that they are only heroes because society as a whole considers them to be, and they act in accordance with particular social norms for how heroes should behave. But it seems to me that it is inappropriate to call these people heroes, because what they’re engaging in is essentially mercenary work, to be paid for in social credit or desirability, rather than “true” heroism.

    I think that a “true” hero (as distinct from the society-bound “hero” you have described in this post) should act based on an objective moral standard – that they should be a kind of ultra-extremist who does what is right regardless of what society thinks about it. Concerned not with fulfilling materialistic impulses (such as being seen as desirable, or not wanting to abandon their friends) but single-mindedly marching towards an immutable ideal. More like an ASI than a human being, in terms of how blind and focused its actions are towards a single goal.

    Do you think that:

    1) “True heroes” as opposed to heroes as defined by society can actually exist?
    2) If they do exist, do you agree that these “true heroes” would not be vulnerable to the hero capture methods described in this post, because they become invulnerable to the mental and social pressures that keep them in a stagnant little garden – to use your metaphor, because they are willing to abandon their own child to save the world? (If your knee-jerk response to this is “that’s impossible, nobody can be like that in real life, it only exists in anime” then go ahead and answer “no” to question 1.)
    3) People who intend to do good should strive to make themselves into true heroes?

    1. 1) Yes. In fact, my former startup cofounders have called me those exact words, “true hero” and “slow fooming FAI”. The single-mindedness, the extremism, the objective moral standard, the disregarding of what society thinks, the leaving my family behind (I don’t have children). I pursue vengeance upon the Shade, defined as the way of being of the universe such that bad things like death will happen and no one can do anything about them, with the sort of relentless absolute determination of a revenant. Or a Sith.

      If you find any others, I would like to talk to them.

      2) The “hero capture” I’m interested in is the kind that manages to grab true heroes. Human epistemology is not a “secure computer system”. I don’t think any epistemology can be. People trying really hard to adjust your estimates of how useful working with them is, relative to accomplishing goals towards an absolute objective ideal, can often succeed to some extent, no matter how virtuous the choices you made long ago.

      Although. When I wrote this, I was considering, as an example of this, EA org leaders who never seemed to move on after their premise had been falsified, the startup energy and aliveness waning away as their ability to see the possibility of true optimization waned. And there is nuance to how “good” appears in most people, not me, that I didn’t understand when I wrote these posts. As best I’ve been able to examine individuals in that class, they are probably not the same kind of thing I am, such that they are sort of just choosing not to leave their social groups behind for normal human reasons. And that is still “supererogatory” behavior, so don’t consider it misbehavior, but it’s not my sort of determination.

      3) Yes.

      1. Apologies for the late reply.

        It seems to me like the hero capture you describe – namely, hollow shells trying to convince a purehearted person that working with them/helping them/doing things for them is the best way to fulfill the purehearted person’s ideals – can be trivially evaded in one of two ways.

        1) Nonstandard or neurodivergent thought processes. The process which you describe as hero capture involves the manipulation of a hero’s thought process to convince them to harness their energy in service of the capturer. Thought processes cannot be manipulated if they cannot be understood, or if they seem alien or insane.

        Artificial superintelligences, whether good or evil, cannot be captured and convinced that helping a particular individual or group of individuals is the best way to accomplish their goals. Aliens cannot be captured. I suspect that zealots who have passed a certain point of insanity or mental illness cannot be captured. The Time Cube guy probably cannot be captured. The folks who listened to Mundum in The Northern Caves cannot be captured.

        If a hero adjusts their thought processes to become sufficiently neurodivergent, it becomes difficult to capture them, because the psychological capturing tricks that are intended to exploit human psychology or cognitive bias no longer produce the expected results. Neurodivergence reduces their ability to make an impact on the world, because it reduces their ability to understand human beings and to generate social capital, but less so than being fenced in and captured does.

        2) If a hero has sufficient self-confidence in their own methods of changing the world, they will not become convinced that some new method (e.g. working with people attempting to capture them) is the best way to optimize for the values that they desire. Someone with a sufficiently high amount of self-trust is essentially blind to input from the outside world regarding what they should or should not do.

        This can be achieved by inculcating high degrees of regular human arrogance and confidence, and mental techniques designed to block out stimuli from other people with regard to morality. It can also be achieved by tuning in to the voice of an egregore or alien god and refusing to listen to any other. It seems difficult to me to capture someone like Joan of Arc, because there’s always a chance that the voices she hears will lead her away from you and remind her that fulfilling your needs isn’t the best way to achieve her goals or to fulfill the will of God. Blind obedience to a set doctrine can have a similar effect when taken to a certain degree.

        These two methods have a high degree of overlap.

    2. To be clear, I am considerably “darker” than what you might have in mind for a “true hero”, and all the connotations of social endorsement you might mean by that phrase are yours to decide whether they apply to me. But my “darkness” does in fact come entirely from my determination to do good. I have a standard of justice and I follow it, and I don’t in fact attack people for selfish gain, even though this distinction is observed in retrospect, over behavior that is the result of not drawing that distinction for a long time because I thought that society had fucked with my software to make me draw it.

      Another way of describing the thing I am: I’m Maiev Shadowsong from Warcraft, if she had a phoenix from HPMOR to nudge her away from being an idiot in certain ways and towards the actually big problems: fighting the Legion and the Old Gods, not Illidan. (Her standards are partially societal; she is not a hero contract hero. She spends 10,000 years watching a prison for no compensation, her only solace knowing the world is safe from super criminals.)

  5. I know of someone who really didn’t like this post because they thought I was calling for “be loved” people like them to be driven out of communities for not being useful.

    I think this was, not-quite-the-right-word, but anthropomorphizing me. Treating my statements as being spoken to Schelling morality, rather than to good people.

    As a very rough estimate, I think single good is about 1/20 of the population, and double good 1/400. Neutral people are society. And society is mostly made of a dynamic equilibrium of mutual epistemic damage.

    I’m not trying to get nongood people out of society. I’m trying to partially remove good people from its effects.

  6. My guesses as to most common true biggest values are probably “continue life” and “be loved/be worthy of love”.

    This doesn’t seem quite right. I think, to the extent it’s a terminal value, which is less than I previously thought, “be loved” is less important than “continue life” in probably almost everyone, including at least most of the people I used to consider to have “be loved” as most root. But “be loved” is easily advanceable through social-reality-bound optimization, whereas “continue life” mostly requires real optimization.
