On Complexity and Control

One of the first useful conclusions I came to in applying information theory to physics was that you could use Shannon entropy to measure the complexity of the motion of an object (see Section 1, generally). For example, light always travels in a perfectly straight line in a vacuum, so its distribution of motion has zero entropy. A baseball, in contrast, wobbles when observed at a sufficiently close scale, and so the entropy of its distribution of motion is greater than zero. Put differently, most objects exhibit some mix of velocities, even when their motion appears rectilinear at our scale of observation. To cut through observational noise, you can cluster the observed velocities, grouping similar velocities together, and then measure the entropy of the distribution over the resultant clusters, which is something most of my clustering algorithms do anyway.
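As a sketch of this idea, the snippet below stands in for the clustering step with simple rounding (the text doesn't specify a particular clustering algorithm, so treat `cluster_by_rounding` as a hypothetical placeholder) and then computes the Shannon entropy of the resulting cluster distribution:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (in bits) of a discrete label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sum((c / total) * math.log2(total / c) for c in counts.values())

def cluster_by_rounding(velocities, precision=1):
    """Toy stand-in for a clustering step: velocities that round to the
    same value at the given precision fall into the same cluster."""
    return [round(v, precision) for v in velocities]

# Light in a vacuum: a single velocity, hence zero entropy.
light = [299792458.0] * 10
print(shannon_entropy(cluster_by_rounding(light)))  # 0.0

# A wobbling baseball: a mix of nearby velocities, hence positive entropy.
baseball = [30.01, 30.02, 29.98, 30.03, 29.97, 30.01]
print(shannon_entropy(cluster_by_rounding(baseball, precision=2)) > 0)  # True
```

Note that the entropy depends on the clustering precision: coarsen the rounding enough and the baseball's velocities collapse into a single cluster, recovering the zero-entropy, rectilinear picture we see at our scale of observation.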

In the case of a system subject to mechanical control, you could of course have macroscopic wobble as a result of some adjustment, which is generally undesirable. What you generally want instead is a smooth rate of change: even if the system is not a vehicle or vessel carrying humans or other living systems, jumpy, uneven motions are aesthetically unappealing, and can even give the impression of incompetence or malfunction.

We can quantify this, again using entropy. For example, imagine a platform supported from below by four columns, and suppose we want to adjust its tilt. If you can deliver only one instruction at a time, its motion will probably be wobbly. Say we want to drop the two columns on the right to some lower, equal height, causing the platform to slope downward from left to right. Assuming all columns start at equal heights, we need to deliver an equal number of decline instructions to each of the two right-hand columns. The resulting sequence of signals consists of exactly two distinct instructions, equal in number, and so has an entropy of \log(2). And this will cause the platform to wobble, because the front and back columns on the right will be at different heights at all times until the drop is complete. Now assume instead that we deliver simultaneous instructions to both the front and back columns on the right. This produces a uniform sequence of signals, consisting of exactly one instruction delivered repeatedly, and that distribution therefore has zero entropy.
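The two cases above can be checked numerically. The instruction names below are hypothetical labels, not part of any real control protocol; entropy is computed in base e to match the \log(2) figure in the text:

```python
import math
from collections import Counter

def instruction_entropy(signals):
    """Shannon entropy (natural log) of the distribution of
    instructions in a signal sequence."""
    counts = Counter(signals)
    total = len(signals)
    return sum((c / total) * math.log(total / c) for c in counts.values())

steps = 5  # hypothetical number of height decrements per column

# One instruction at a time: alternate between the two right-hand columns.
sequential = ["drop_front_right", "drop_back_right"] * steps
print(instruction_entropy(sequential))  # log(2) ≈ 0.693

# Simultaneous: one combined instruction, delivered repeatedly.
simultaneous = ["drop_right_pair"] * steps
print(instruction_entropy(simultaneous))  # 0.0
```

Both sequences leave the right side of the platform at the same final height; only the distribution of instructions, and hence the smoothness of the motion, differs.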

What this simple example highlights is that we can use entropy to measure the complexity of the distribution of instructions delivered to a mechanism, which in turn lets us measure the complexity of the resultant behavior. We could go further, and attempt ex ante to minimize the entropy of the distribution of instructions over some collection of instruction sequences that all achieve the same goal state. On this view, the minimum entropy sequence of instructions would be the simplest way to reach the goal state.
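A minimal sketch of that ex ante minimization, under the assumption of a small hand-written candidate set rather than a full search: each candidate sequence is simulated against hypothetical per-instruction effects, sequences that reach the goal state are kept, and the one with minimum instruction entropy is selected.

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy (natural log) of an instruction sequence."""
    counts = Counter(seq)
    n = len(seq)
    return sum((c / n) * math.log(n / c) for c in counts.values())

# Hypothetical effect of each instruction on the two right-hand
# column heights (front_right, back_right).
EFFECTS = {
    "drop_front_right": (-1, 0),
    "drop_back_right": (0, -1),
    "drop_right_pair": (-1, -1),
}

def final_state(seq, start=(3, 3)):
    fr, br = start
    for ins in seq:
        d_fr, d_br = EFFECTS[ins]
        fr, br = fr + d_fr, br + d_br
    return (fr, br)

goal = (0, 0)
candidates = [
    ["drop_front_right", "drop_back_right"] * 3,
    ["drop_front_right"] * 3 + ["drop_back_right"] * 3,
    ["drop_right_pair"] * 3,
]

# Keep only sequences that actually reach the goal, then pick the
# minimum entropy one: the simplest route to the goal state.
valid = [s for s in candidates if final_state(s) == goal]
best = min(valid, key=entropy)
print(best)  # ['drop_right_pair', 'drop_right_pair', 'drop_right_pair']
```

All three candidates reach the same goal state, but the simultaneous sequence has zero entropy while the other two sit at \log(2), so the minimization selects it, matching the intuition that the smoothest motion comes from the simplest instruction distribution.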