Lemma: Decisions can be modeled by some algorithm
Proof: Consider writing down the list of all your actions. There is some algorithm which generates this string.
This is one of the main premises of timeless decision theory. The idea is that "choosing" makes no sense: you always make the decision that subjectively maximizes your utility function; you just aren't sure what that decision is until you make it. The feeling of being in control comes from the process of generating counterfactual scenarios and evaluating their utility until we find a maximum. Because of the way our brains process memory and imagination, these imagined scenarios feel like they "could have been", if only we had made a different decision.
One practical implication of this view is a piece of advice from a LW willpower thread: rather than deciding our action in each particular scenario, we should choose as if we were choosing the output of our decision algorithm. This supposedly makes it easier to maintain (for example) a diet. But why should that be the case?
Let's take for granted that our decisions are determined by some biological algorithm, but which one? When we make decisions, each one feels like a fresh scenario: we could choose to do anything we wanted; it just so happens that we choose predictably. This corresponds to a pointwise encoding of our decision algorithm: the algorithm that simply stores the literal sequence of decisions and prints it.
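The pointwise encoding can be made concrete with a small sketch. Everything here (the decision strings, the `rule` function) is illustrative, not from the original text: the point is only that a rule-based encoding has constant size while the pointwise table grows with the number of decisions.

```python
# Two encodings of the same decision sequence.

decisions = ["decline"] * 100  # e.g. 100 days of refusing dessert

# Pointwise encoding: store every decision literally, one entry per day.
pointwise = list(decisions)

# Rule-based encoding: a tiny program that regenerates the same sequence.
def rule(n):
    return ["decline"] * n

# Both encodings produce identical behavior...
assert rule(100) == pointwise
# ...but the rule is a constant-size description, while the pointwise
# table grows linearly with the number of decisions made.
```

The asymmetry is the usual Kolmogorov-style observation: a patterned sequence admits a description much shorter than itself.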
Take for granted that we have some goal (say, not eating ice cream). This goal imposes a pattern on our sequence of actions, and whenever a sequence has a pattern, we can compress it into a lower-entropy representation.
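A quick way to see this empirically is to hand a patterned action sequence and a patternless one to an off-the-shelf compressor. The sequences below are made up for illustration; only the size comparison matters.

```python
import random
import zlib

random.seed(0)

# A goal imposes a pattern: "always decline" makes the action
# sequence highly regular, hence compressible.
patterned = ("decline," * 1000).encode()

# No goal, no pattern: random "decisions" are nearly incompressible.
noisy = bytes(random.randrange(256) for _ in range(len(patterned)))

print(len(patterned), "->", len(zlib.compress(patterned)))  # shrinks a lot
print(len(noisy), "->", len(zlib.compress(noisy)))          # barely shrinks
```

The compressed patterned sequence is a few dozen bytes; the random one stays roughly its original size. That gap is the "lower-entropy representation" the goal buys you.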
Let's assume for a moment that willpower is a semi-finite resource, and that decision fatigue and ego depletion are real effects. More generally, we can assume that there is some information (semi-)conservation principle in the universe, which seems plausible but is not well understood. In this setting, an agent wants to make high-impact but low-complexity decisions: it must trade off being correct against conserving energy, so it makes sense to choose a simple decision rule whenever possible. However, the real world is not so simple.
Consider an iterated game between two bounded agents. If their processing power is greatly unequal, the stronger agent will nearly always win, because a more complex strategy demands a more complex response. Shifting this perspective, we can treat any environment as an agent. Clearly, the rest of the world has more entropy than you, so in general any simple strategy you come up with will be incomplete. The best you can hope to do is make decisions pointwise, considering all prior information every time you make one.

One way to think of it, which may or may not be how the brain actually works (but which is still useful, because it bounds all computational systems), is that every time you commit to a simple rule, you spin up a subprocess dedicated to that task. In practice, you can't just "decide" to commit to a rule (as many LW zealots would suggest); it's more like forming a habit. So committing to a decision rule (spinning up a subprocess) costs energy up front, but each invocation is much cheaper, because the rule efficiently compartmentalizes information as an in/out process.

A cute example I use is to always order what the other person orders at a restaurant (or their second choice, if they have one, to improve variety). It took a bit of time to think this up and commit to it (not much!), but it saves a lot of thinking in the future, with pretty good results. The general principle is that the best you can do in an infinite game is to pick simple rules that capture "most" of the value. However, it also costs energy to modify a strategy: killing an old habit is often harder than starting a new one, because habits gain momentum.
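The restaurant rule, and the energy tradeoff behind committing to it, can be sketched as follows. The function names and the specific costs (`setup`, `per_use`, `deliberate`) are hypothetical numbers chosen for illustration, not anything from the original text.

```python
# The committed "subprocess": a rule with trivial per-invocation cost.
def restaurant_rule(partner_first, partner_second=None):
    """Order the partner's second choice if they have one (for variety),
    otherwise copy their first choice."""
    return partner_second if partner_second is not None else partner_first

# Hypothetical cost model: committing to a rule costs `setup` once and
# `per_use` per invocation, versus `deliberate` for a fresh pointwise
# decision each time. The rule pays off once
#   n * deliberate > setup + n * per_use.
def breakeven(setup, per_use, deliberate):
    return setup / (deliberate - per_use)

print(restaurant_rule("ramen", "curry"))   # -> curry
print(round(breakeven(30, 0.1, 5), 1))     # -> 6.1 (rule wins by the 7th meal)
```

The break-even point is what makes habit formation worthwhile: the one-time setup cost amortizes quickly when the rule fires often.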
Shift perspective again to consider not individuals but systems: corporations, religious or political groups, and so on. These too can be considered agents (in a much more salient way than "the environment" in general), with their own goals and strategies. Such systems have a benefit we don't: they can add computing power (members) relatively easily. This sort of cosmological expansion greatly magnifies the momentum of any existing strategy, as each new member is likely to inherit it. The pattern is commonly seen in the lifecycles of corporations: a lightweight company captures some market inefficiency with a new approach; it balloons up in success and becomes too rigid to adapt, either coasting its way into irrelevance or getting killed by a more agile competitor. If only it could stay at that sweet spot: powerful enough to afford risks, without being mired in bureaucracy.
The only groups that seem to resist this are those with "visionary" leaders who can synchronize the organization while still making quick decisions. But these groups are fragile, dying along with their leader. As soon as distributed decision-making is allowed, the system gains momentum, which allows persistence but prevents change.
When enough momentum accumulates, weird things can happen, like practices that everyone (well, a sizable majority) hates but that never seem to change. What's going on here? From the inside, as a single member of an organization, it can be maddening. You can see at a smaller scale than the kami you are part of: what is "locally obvious" to you may be too subtle, too expensive, for the organization to adopt. The microscopic reasoning is that simple, ambiguous ideas can fit into more people's worldviews. Conflation (logically, the "with" connective) plays a large role here, allowing multiple ideas to be bundled together, with each convert picking out the one that suits them best. Conflation is dangerous, though: it is not denotational but operational, meaning it does not preserve teleology, so its end result is unpredictable and will almost always take on a mind of its own.
So you'd damn well better seed your organization right the first time.