Computation From Within

I’m going to talk about the qualitative computational strength afforded by evolution. More generally, I’m going to talk about what you gain when you weave a bunch of things that can do computation into an evolutionary game: a system made out of those things. In pseudocode:

Computes a => Meta a

This is a very simple construction, and you might wonder whether there’s anything useful we can say about it. First, let me give some examples, in roughly increasing order of complexity.

  1. Many perceptrons combine into an Artificial Neural Network (ANN)
  2. Many Actors combine into an Erlang program
  3. Many cells combine into an organ
  4. Many organs combine into an organism
  5. Many organisms combine into a colony
  6. Many species combine into an ecosystem

Notice: individual perceptrons are weak. In an ANN, each perceptron only computes a weighted sum and a threshold, but a suitably connected (recurrent) network can be Turing complete. Clearly, something is gained here, but where does the extra power come from? We’re not just talking about extra memory or efficiency from added processors; we’re talking about a qualitative expansion in the types of things that can be computed, and that is a big deal. The only place for this extra power to hide is in the connections between neurons. We can thus view an ANN quite clearly as a directed graph with nodes labeled by perceptron computations.
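To make the “graph of perceptron computations” picture concrete, here’s a minimal Haskell sketch; perceptron, layer, and xorNet are names I’m making up for illustration, and the step activation is just one possible choice:

    -- A perceptron: weighted sum of its inputs plus a bias, through a step activation.
    type Weights = [Double]

    perceptron :: Weights -> Double -> [Double] -> Double
    perceptron ws bias xs = activate (sum (zipWith (*) ws xs) + bias)
      where activate v = if v > 0 then 1 else 0

    -- A layer is many perceptrons reading the same inputs;
    -- a network is layers wired output-to-input.
    layer :: [(Weights, Double)] -> [Double] -> [Double]
    layer ps xs = [perceptron ws b xs | (ws, b) <- ps]

    -- XOR, which no single perceptron can compute, falls out of composing two layers.
    xorNet :: [Double] -> Double
    xorNet = head . layer [([1, -2], -0.5)] . layer [([1, 1], -0.5), ([1, 1], -1.5)]

Each node is trivial; all of the interesting structure lives in the wiring, exactly as claimed above.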

If we generalize perceptrons to arbitrary functions rather than simple arithmetic, we get something more like the Actor Model. Since ANNs are already Turing complete, this would at first not seem to gain us much other than convenience. Consider, though, that an actor can do something a simple function cannot: it can fail, and in fact Erlang is famous for gracefully handling process failure. You can now see the relation to evolutionary games. If we interpret each life form as a hypothesis about how to survive its environment, then it’s a nice property that one hypothesis can fail without bringing the whole system down. But we’re still missing the secret ingredient of life: if we start with a fixed collection of hypotheses, that’s just one big meta-hypothesis; eventually they could all fail, and then we’re out of luck. What we need is a way to introduce new hypotheses. What we need is a Monad.
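As a hedged sketch of the failure-tolerance half of this, in Haskell (Env, survives, and step are my own stand-ins, not anything from Erlang):

    import Data.Maybe (mapMaybe)

    -- One round of the game: each hypothesis either survives its environment
    -- (Just a) or fails (Nothing). A failure shrinks the pool; it never
    -- crashes the whole system.
    type Env = Double

    step :: (Env -> a -> Maybe a) -> Env -> [a] -> [a]
    step survives env pool = mapMaybe (survives env) pool

The catch is exactly the one above: nothing stops the pool from shrinking all the way down to the empty list, which is the gap reproduction fills next.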

Unlike an ANN or an Erlang program, life forms can replicate themselves (approximately). For simplicity, let’s confine ourselves to asexual reproduction. If we take ‘Meta a’ to be the type of a group of ‘a’s, then reproduction is an arrow (a -> Meta a). Naturally, reproduction happens within a larger group, and we can always stitch an individual’s offspring back into the whole, so what we really have is (Meta a -> (a -> Meta a') -> Meta a'), which is exactly monadic bind. This gives us something quite powerful: a chance at immortality. There’s an old math puzzle that goes something like this:

Suppose we have a bacterium. At each time step, each bacterium in the lineage either dies or splits in two, with probabilities p and 1 - p respectively.

It turns out that the exponentially branching growth cancels the exponentially decaying chain of survival probabilities, and for p < 1/2 we get a positive probability that the bacterial lineage never dies out (try to prove it!). Now, this comes just from a constant death probability for each bacterium. In real evolution we can do better: organisms with a lower death probability are more likely to leave descendants, so we expect the average p to fall steadily. Barring large extra-systemic shocks (like the planet exploding), “life” (which is to say, the descendants of the proto-slime) is pretty darn near immortal.
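If you’d rather check the puzzle numerically than prove it, here’s a small Haskell sketch (extinction and survival are names I’m inventing): conditioning on the first time step gives q = p + (1 - p)q² for the probability q that the lineage eventually dies out, and iterating that map from q = 0 converges to the smallest root.

    -- Extinction probability q solves q = p + (1 - p) * q^2: either the
    -- bacterium dies now, or it splits and both child lineages must die out.
    -- For p < 1/2 the smallest root is p / (1 - p), so the lineage survives
    -- forever with probability (1 - 2p) / (1 - p) > 0.
    extinction :: Double -> Double
    extinction p = go 0
      where
        go q = let q' = p + (1 - p) * q * q
               in if abs (q' - q) < 1e-12 then q' else go q'

    survival :: Double -> Double
    survival p = 1 - extinction p   -- e.g. survival 0.25 is roughly 0.667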

Note: I’ve described asexual reproduction here because it’s simpler; sexual reproduction also requires (local) interaction within the group, rather than just an individual, but it’s otherwise similar.
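To make the signatures above literal, here is one minimal, hedged rendering in Haskell; Meta, reproduce, and generation are toy names of my own, with the population represented as a plain list:

    -- Take Meta a to be a plain population of a's.
    type Meta a = [a]

    -- Reproduction of one individual, the arrow (a -> Meta a). Here an
    -- individual is just its death probability p, and offspring inherit it
    -- with a little mutation (a toy stand-in, not a biological model).
    reproduce :: Double -> Meta Double
    reproduce p = [max 0 (p - 0.01), min 1 (p + 0.01)]

    -- Stitching everyone's offspring back into the whole group is
    -- (Meta a -> (a -> Meta a') -> Meta a'), i.e. the list monad's (>>=).
    generation :: Meta Double -> Meta Double
    generation pop = pop >>= reproduce

On this reading, “going meta” is just a choice of monad, with bind doing the stitching.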

What does this mean for us lowly humans? The good news is that ‘humanity’ is a sort of meta-thing, and so has all the strengths I’ve spoken of. The human meta-entity absorbs knowledge from its constituents, immortalized in a chain of human communication. Even while individuals and groups wax, wane, and die, humanity marches endlessly forwards. The bad news is that I’ve hidden a problem from you. I’ve hidden it because I’m not sure how to fix it. Unlike the bacterium in the example, which produces exact copies of itself, real replicators don’t produce exact copies. Certainly our chain of descendants will be near immortal, but in what sense will they be “the same” as us?

We’d like to say that descendants are “the same” in the sense that they are clustered nearby in thingspace. This gives us a clue about what sort of things should be allowed to go meta. In particular, it makes sense to talk about a collection of a’s as an independent Meta a in some context if its behavior in that context does not depend strongly on the behavior of any individual or small group. That is, Meta a is differentially private! This criterion makes it clear that one of our previous examples doesn’t work as well as the others. While an organ is stable even if a small group of its cells die, an organism has much less tolerance for organ failure: a small heart defect can take the whole system down! The teleological view is that the body uses heterogeneous organs to save resources in making a “minimum viable human”, and accepts a stability tradeoff in doing so; homogeneous systems are more stable because they have more symmetries.
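For reference, the standard definition being invoked here: a randomized map M from populations to observable behavior is ε-differentially private if, for any two populations D and D′ differing in a single individual and any set of outcomes S,

    Pr[ M(D) ∈ S ] ≤ e^ε · Pr[ M(D′) ∈ S ]

Small ε means no single member can shift the meta-entity’s behavior by much; applying this to cells and organs is, of course, an analogy rather than a literal theorem.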

Shift perspective downwards: the brain is very stable not just to the loss of individual cells but to seemingly dramatic rewirings, so maybe it’s built on something larger than cells? Take the internal view, that the mind is composed of many competing subprocesses vying for control, each one thought of as a hypothesis about which action to take. This creates a sort of evolutionary game for thoughts (spatiotemporal firing patterns), where a thought lives while it’s firing and is otherwise in stasis, or dead. The individual thought dies, but the mind salvages the remains and is better for it. The power of the mind is the ability to keep playing.

Both individual human minds and the human meta-minds (Kami) are Turing complete, so they should be able to process the same sorts of things. Digital immortality suggests that thoughts should be substrate independent. Humans are fragile and the meta-mind is near immortal, yet we can live only through our own eyes. Is it possible, or even meaningful, to “blow up” a human mind, embedding it as a distributed entity rather than porting it one-for-one to a computer? I suspect not totally, since the network topology of human civilization is very different from that of the brain (for one thing, there’s a lot more latency). However, the tales of “charismatic leaders” becoming Kami are tantalizing, and suggest that, at the least, human minds can act as a seed for distributed entities.
