Epistemic status update 2018-04-22: I believe I know exactly why this works for me, what class of people it will work for, and that it will not work for most people, but I will not divulge details at this time.
If you have subagents A and B, and A wants as many apples as possible, and B wants as many berries as possible, and each values every additional fruit the same no matter how many they already have, then there are two classes of ways you could combine them, with fundamentally different behavior.
If a person, “Trent”, were a treaty made of A and B, he would probably do something like alternating between pursuing apples and berries, no matter how lopsided the prospects for apples and berries were. The amount of time/resources he spent on each would be decided by the relative amounts of bargaining power each subagent had, independently of how much each was actually getting.
To B, all the apples in the world are not worth one berry. So if bargaining power is equal and Trent has one dollar to spend, and 50 cents can buy either a berry or 1000 apples, Trent will buy one berry and 1000 apples. Not 2000 apples. Vice versa if berries are cheaper.
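To make the treaty's allocation rule concrete, here is a minimal sketch in Python. The function name and the equal-bargaining-power default are mine, for illustration only; the point is that the budget gets divided by bargaining power first, and prices only determine how much fruit each share buys.

```python
# Toy model of Trent-as-treaty: the budget is split by bargaining
# power, and each subagent spends its share on its own fruit.
# Prices affect how much fruit each share buys, never the split itself.

def treaty_spend(budget, price_apple, price_berry, power_a=0.5, power_b=0.5):
    """Return (apples, berries) bought under the treaty."""
    apples = (budget * power_a) / price_apple
    berries = (budget * power_b) / price_berry
    return apples, berries

# 50 cents buys 1000 apples (so $0.0005 each) or one berry ($0.50 each):
print(treaty_spend(1.00, 0.0005, 0.50))  # (1000.0, 1.0), not (2000, 0)
```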
A treaty is better than anarchy. After buying 1000 apples, A will not attempt to seize control on the way to the berry store and turn Trent around to go buy another 1000 apples after all. That means Trent wastes fewer resources on infighting. Although A and B may occasionally scuffle to demonstrate power and demand a greater fraction of resources, most of the time both are resigned to wasting a certain amount of resources on the other. Unsurprising. No matter how A and B are combined, the result must seem like at least partial waste from the perspective of at least one of them.
But it still feels like there’s some waste going on here, like “objectively” somehow, right? Waste from the perspective of what utility function? What kind of values does Trent the coalition have? Well, there’s no linear combination of utilities of apples and berries such that Trent will maximize that combined utility. Nor does making their marginal utilities nonconstant help, because Trent’s behavior doesn’t depend on how many apples and berries he already has. What determines allocation of new resources is bargaining outcomes, which are determined by threats and by what happens in case of anarchy, which in turn depend on what the subagents and the agent can do in the future. What they already have, and anything else independent of the whole person’s choices, is irrelevant. Trent doesn’t have a utility function over just apples and berries; to gerrymander a utility function out of this behavior, you need to also reference the actions themselves.
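One way to see the linear-combination claim, as a hedged sketch (the weights alpha and beta are free parameters I am introducing for illustration): with constant marginal utilities, any maximizer of a fixed linear combination of A's and B's utilities compares utility per dollar and spends the whole budget at a corner, so no choice of weights reproduces the treaty's interior split of 1000 apples and 1 berry.

```python
# Sketch: a maximizer of U = alpha*apples + beta*berries (constant
# marginal utilities) compares utility-per-dollar and goes all-in
# on the winner -- a corner solution, never the treaty's (1000, 1).

def linear_maximizer_spend(budget, price_apple, price_berry, alpha, beta):
    if alpha / price_apple >= beta / price_berry:
        return budget / price_apple, 0.0   # all apples
    return 0.0, budget / price_berry       # all berries

# For any positive weights, the answer is (2000, 0) or (0, 2):
for alpha, beta in [(1, 1), (1, 1000), (1, 1_000_000)]:
    print(alpha, beta, linear_maximizer_spend(1.00, 0.0005, 0.50, alpha, beta))
```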
But note that if there were a 50/50 chance of which fruit would be cheaper, both subagents get higher expected utility if the coalition is replaced by the fusion that maximizes apples + berries. It’s better to have a 50% chance of 2000 utility and a 50% chance of nothing than a 50% chance of 1000 utility and a 50% chance of 1. If you take veil of ignorance arguments seriously, pay attention to that.
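Spelled out with the numbers above, from either subagent's perspective (they are symmetric under the 50/50 coin):

```python
# Expected utility for one subagent, 50/50 chance its fruit is the cheap one.
# Treaty: 1000 of the cheap fruit and 1 of the expensive fruit get bought,
# so the subagent gets 1000 if lucky, 1 if not.
treaty_expected = 0.5 * 1000 + 0.5 * 1      # 500.5

# Fusion maximizing apples + berries: 2000 of the cheap fruit get bought,
# so the subagent gets 2000 if lucky, 0 if not.
fusion_expected = 0.5 * 2000 + 0.5 * 0      # 1000.0

print(treaty_expected, fusion_expected)     # fusion wins for both subagents
```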
Ever hear someone talking about how they need to spend time playing so they can work harder afterward? They’re behaving like a treaty between a play subagent and a work subagent. Like Trent, they do not have a utility function over just work and play. If you change how much traction the work has in achieving what the work subagent wants, or change the fun level of the play, this model-fragment predicts no change in resource allocation. Perhaps you work toward a future where the stars will be harnessed for good things. How many stars are there? How efficiently can you make good things happen with a given amount of negentropy? What is your probability you can tip the balance of history and win those stars? What is your probability you’re in a simulation and the stars are fake and unreachable? What does it matter? You’ll work the same amount in any case. It’s a big number. All else is negligible. No amount of berries is worth a single apple. No amount of apples is worth a single berry.
Fusion is a way of optimizing values together, so they are fungible, so you can make tradeoffs without keeping score, apply your full intelligence to optimize additional parts of your flowchart, and realize gains from trade without the loss of agentiness that democracy entails.
But how?
I think I’m gonna have to explain some more ways how not, first.
I recently ran into someone saying they value duty and honor, truth and understanding, ambition and agency, and community and loyalty all equally. Before understanding this concept, I was confused by comments like this. What could that even mean? The units aren’t even the same! But now I suspect I understand.