{"id":84,"date":"2017-11-29T21:31:18","date_gmt":"2017-11-29T21:31:18","guid":{"rendered":"http:\/\/sinceriously.fyi\/?p=84"},"modified":"2019-04-03T05:20:56","modified_gmt":"2019-04-03T05:20:56","slug":"subagents-are-not-a-metaphor","status":"publish","type":"post","link":"https:\/\/sinceriously.fyi\/subagents-are-not-a-metaphor\/","title":{"rendered":"Subagents Are Not a Metaphor"},"content":{"rendered":"

Epistemic status: mixed; for some of this I’ve long since forgotten why I believe it.

There is a lot of figurative talk about people being composed of subagents that play games against each other, vie for control, form coalitions, and have relationships with each other… In my circles, this is usually done with disclaimers that it’s a useful metaphor, half-true, and/or wrong but useful.

Every model that’s a useful metaphor, half-true, or wrong but useful is useful because something (usually something more limited in scope) is literally all-true. The people who come up with metaphorical, half-true, or wrong-but-useful models usually have the nuance there in their heads. Explicit verbalness is useful, though: for communicating, and for knowing exactly what you believe so you can reason about it in lots of ways.

So when I talk about subagents, I’m being literal. I use the word very loosely, but loosely in the narrow sense in which people use words loosely when they say “technically”. It still adheres completely to an explicit idea, and the broadness comes from the broad applicability of that explicit idea. Hopefully in the way economists mean it when they call things markets that don’t involve any exchange of money.

Here are the parts composing my technical definition of an agent:

1. Values
   This could be anything from literally a utility function to something highly framing-dependent. Degenerate case: values embedded in a lookup table from world model to actions.
2. World-Model
   Degenerate case: a stateless world model consisting of just sense inputs.
3. Search Process
   Causal decision theory is a search process. “From a fixed list of actions, pick the most positively reinforced” is another. Degenerate case: a lookup table from world model to actions.

Note: this says a thermostat is an agent. Not figuratively an agent. Literally, technically an agent. Feature, not bug.
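To make the definition concrete, here is a minimal sketch of a thermostat written out as the three parts. The code and all its names are my own illustration, not any standard formalism:

```python
# Hypothetical illustration: a thermostat decomposed into the three
# parts above. Every name here is made up for this sketch.

SETPOINT = 20.0  # target temperature in degrees C

def values(predicted_temp):
    """Values: prefer predicted temperatures near the setpoint."""
    return -abs(predicted_temp - SETPOINT)

def world_model(sensed_temp, action):
    """Degenerate world model: just the sense input, plus a crude
    prediction of each action's effect on it."""
    effect = {"heat_on": 1.0, "heat_off": -1.0}
    return sensed_temp + effect[action]

def search(sensed_temp):
    """Search process: from a fixed list of actions, pick the one
    whose predicted outcome the values rank highest."""
    actions = ["heat_on", "heat_off"]
    return max(actions, key=lambda a: values(world_model(sensed_temp, a)))

print(search(17.5))  # -> "heat_on"
print(search(23.0))  # -> "heat_off"
```

The search process here is close to the degenerate case; swap in a richer world model and search and the same skeleton describes much more capable agents.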

The parts have to be causally connected in a certain way: values and world model feed into the search process, and the search process has to be connected to the actions the agent takes.

Agents do not have to be cleanly separated. They are occurrences of a pattern, and patterns can overlap, like the two instances of the pattern “AA” in “AAA”. Like two values stacked on the same set of available actions at different times.

It is very hard to track all the things you value at once, complicated human. There are many frames of thinking in which some of them are more salient than others.

I assert that how processing power will be allocated (including default mode network processing), what explicit structures you’ll adopt and to what extent, and even what beliefs you can have, are decided by subagents. These subagents mostly seem to have access to the world model embedded in your “inner simulator”: your ability to play forward a movie based on anticipations from a hypothetical. Most of this seems to be unconscious. Doing Focusing seems, to me, to dredge up what I think are the models subagents are making decisions based on.

So cooperation among subagents is not just a matter of “that way I can brush my teeth and stuff”, but is a heavy contributor to how good you will be at thinking.

You know that thing people are accessing when you ask whether they’ll keep their New Year’s resolutions, and they say “yes”, and you say “really?”, and they say “well, no”? Inner sim sees through most self-propaganda. So they can predict what you’ll do, really. Therefore, using timeless decision theory to cooperate with them works.
