DRM’d Ontology

Let me start with an analogy.

Software often has what’s called DRM (digital rights management), which deliberately limits what the user can do. Like how Steam’s primary function is to force you to log in to run programs that are already on your computer, so people have to pay money for games. When a computer runs software containing DRM, some of the artifice composing that computer is not serving the user.

Similarly, you may love Minecraft, but Minecraft runs on Java, and Java’s installer tries to trick you into putting Yahoo search bars into your browser every once in a while. So you hold your nose and make sure you remember to uncheck the box every time Java updates.

It’s impractical for most people to separate the artifice that doesn’t serve them from the artifice that does. So they accept a package deal which is worth it on the whole.

The software implements and enforces a contract. This allows a business transaction to take place. But let us not confuse the compromises we’re willing to make when we have incomplete power for our own values in and of themselves.

There are purists who think that all software should be an agent of the user. People who have this aesthetic settle on mixtures of a few strategies:

  • Trying to communally build their own free open source artifice to replace it.
  • Containing the commercial software they can’t do without in sandboxes of various sorts.
  • Holding their noses and using the software normally.

Analogously, I am kind of a purist who thinks that all psychological software should be an agent of the mind wielding it.

Here are the components of the analogy:

  • Artifice (computer software or hardware, mental stuff) serving a foreign entity.
  • That artifice is hard to disassemble, creating a package deal with tradeoffs.
  • Sandboxes (literal software sandboxes, false faces) used to extract value.

Note I am not talking about accidental bugs here. I am also not talking about “corrupted hardware,” where you subvert the principles you “try” to follow. Those hidden controlling values belong to you, not a foreign power.

Artifacts can be thought of as a form of tainted software you have not yet disassembled. They offer functionality it’d be hard to hack together on your own, if you are willing to pay the cost. Sandboxes are useful to mitigate that cost.

Sometimes the scope of the mental software serving a foreign entity is a lot bigger than a commandment like “authentically expressing yourself”, “never giving up”, or “kindness and compassion toward all people”. Sometimes it’s far deeper and vaster than a single sentence can express. Like an operating system designed to only sort of serve the user. Or worse. In this case, we have DRM’d ontology.

For example…

The ontology of our language for talking about what we want to happen to other people, and how to behave when our actions affect other people, is not designed to serve our own values. It is designed to serve something like a negotiated compromise based on political power, and to serve the subversion of that compromise for purposes a potentially more selfish person than us would have in our place.

A major concept in talk about “morality” is a separation between what you are “obligated” to do and what is “supererogatory”. Suppose you “believe” you are “obligated” to spend 10% of your time picking up trash on beaches. How does the difference between spending 9% of your time on it and spending 10% compare to the difference between spending 10% and spending 11%?

For a fused person who just thinks clean beaches are worth their time, probably not much. The marginal return of cleaner beaches is about the same on either side of that 10% line.

Then why are people so interested in arguing about what’s obligatory? Well, there is more at stake than the clean beaches themselves. What we all agree is obligatory has social consequences. Social consequences big enough to try to influence through argument.

It makes sense to be outraged that someone would say you “are” obligated to do something you “aren’t”, and counter with all the conviction of someone who knows it is an objective fact that they are shirking no duty. That same conviction is probably useful for getting people to do what you want them to. And for coordinating alliances.

If someone says they dislike you and want you to be ostracized and want everyone who does not ostracize you to be ostracized themself, it doesn’t demand a defense on its own terms like it would if they said you were a vile miscreant who deserved to be cast out, and that it was the duty of every person of good conscience to repudiate you, does it?

Even if political arguments are not really about determining the fact of some matter that already was, but about forming a consensus, the expectation that someone must defend themselves as if they were arguing facts is still a useful piece of distributed software. It implements a contract, just like DRM.

And if it helps a group of people who each only marginally care about clean beaches portion out work to solve a collective action problem, then I’m glad this works. But if you actually care enough about others to consider acting unilaterally even if most people aren’t and won’t…

Then it makes sense to stop trying to find out if you are obligated to save the drowning child, and instead consider whether you want to.

The language of moral realism describes a single set of values. But everyone’s values are different. “Good” and “right” are a set of values that is outside any single person. The language has words for “selfish” and “selfless”, but nothing in between. This, and the usage of “want” in “but then you’ll just do whatever you want!”, show an assumption in that ontology that no one actually cares about other people in their original values, prior to strategic compromise. The talk of “trying” to do the “right” thing, as opposed to just deciding whether to do it, indicates false faces.

If you want to fuse your caring about others and your caring about yourself, let the caring about others speak for itself in a language that is not designed on the presumption that it does not exist. I was only able to really think straight about this after taking stuff like this seriously and eschewing moral language and derived concepts in my inner thoughts for months.

12 thoughts on “DRM’d Ontology”

  1. What a turnabout that I’m calling my values “good” after saying “‘Good’ and ‘right’ are a set of values that is outside any single person.”

    It turns out my values just happen to correspond, about as well as language can be expected to allow, with that word. And e.g., if other people think carnism is okay, and roll that into the standard definition of “good”, then I won’t let them claim this word, insofar as that means convincing me to describe myself as a “villain” like I used to. Because in a sense I care about, and which people I want to communicate with care about, that’s them executing deception and driving out our ability to communicate.

    Our word. Hiss.

    1. I’m still describing myself as a Sith though. It feels like the truth contained in that frame, about how it is right to hold yourself in relation to a corrupt and evil society, lies along the shortest path of communication. The “I’m a good Sith” clarification is easier than “I’m good, but I hold myself in opposition to socially constructed morality, see myself as individually responsible for thwarting and fixing a mostly hostile world, via clever schemes, etc.”.

    2. I can’t tell whether my original decision to write the way I did before this was a good one. It made a lot of evil people like my blog and want to talk to me. Which I guess is better than (most) neutral people (who are collectively about as bad, but in a way that can’t be talked to because it’s semi-conscious). But I’ve gotten kind of sick of them.

  2. I think most parts of your tech boil down to: break down ideologies/ways of thinking/models of the world before learning them (no DRM), and “do what you really want”. The rest of improving willpower is in learning how to do this in practice (what does it actually feel like to do this? What are the steps on a smaller scale? It’s a novel feeling to most).

    Most of the rest of your posts are model-building about how people work, mixed up with various aesthetics you have.

  3. I think the examples of Java and Minecraft are out of date now. But I’m not going to bother to verify this.

  4. You mean “disassemble” (take apart), not “dissemble” (lie, esp. about internal state), right? If so, it’s worth correcting, since “dissemble” is close to sandbox/false face.

  5. You seem to like to use analogies from FOSS to describe morality. Do you think good would reinvent Free Software had it not already been invented? Do you think Richard Stallman is good?

    1. Well, DRM is inherently destructive and adversarial. I don’t know what counterfactual you’re imagining about FOSS. Destruction and adversarialness make sense against evil; I wouldn’t and won’t be a software mule for an evil civilization. On a planet of good people I’d just open source everything. I don’t know shit about Stallman, but I doubt it.
