Some Thoughts on Random Variables

I’ve been working on a new model of probability theory in my free time, and though I’m nowhere near done, I’ve arrived at what I believe is a fairly elegant definition of a random variable. It captures the notion I’m after, which is ultimately rooted in computability theory and information theory:

The basic idea is that if a source is truly random, then prior observations should have no impact on future observations. We can express this rigorously by saying that a source S generates signals over an alphabet \Sigma, and that for any N observations, regardless of any prior observations, the set of possible sequences of observations is the full set of |\Sigma|^N strings of length N. What this says is that the set of possible outcomes never changes, no matter how many observations you’ve already made. One consequence is that you always have the same uncertainty about your next observation, which could produce any one of the signals in \Sigma.
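
To make this first condition concrete, here’s a minimal Python sketch. The two-symbol alphabet and the function names are my illustrative choices, not part of the definition; the point is that the set of possible length-N continuations is computed without any reference to the history:

```python
from itertools import product

# Sketch of the property above: for a truly random source over an
# alphabet sigma, the set of possible length-N continuations is the
# full set of |sigma|**N strings, whatever has already been observed.

sigma = ("0", "1")  # illustrative two-symbol alphabet

def possible_continuations(prior_observations: str, N: int) -> set:
    """Return the set of possible next-N observation sequences.

    For a truly random source this ignores prior_observations entirely:
    every one of the |sigma|**N strings remains possible.
    """
    return {"".join(s) for s in product(sigma, repeat=N)}

# The set is identical regardless of history:
assert possible_continuations("0110", 3) == possible_continuations("", 3)
print(len(possible_continuations("", 3)))  # |sigma|**N = 2**3 = 8
```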

However, if the source is truly random, then it should eventually produce a roughly Kolmogorov random string. If that’s not the case, then there will always be some computable process that can generate the observations in question. For example, the digits of a computable real number like \pi or e might seem superficially random, but they’re not: they are entirely determined ex ante by a computable process. If a source is truly random, then intuition suggests that it will eventually evade modeling by any UTM, which implies that for a sufficiently large number of observations, the string of observations generated by the source should approach the complexity of a Kolmogorov random string.
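
Since K is uncomputable, this can’t be tested directly, but a weak stand-in makes the \pi example vivid. The sketch below is my illustration, not part of the definition: it uses zlib-compressed length as a computable upper bound on complexity, and assumes the mpmath library is available for \pi’s digits:

```python
import random
import zlib

from mpmath import mp  # assumption: mpmath is installed, used only for pi's digits

# K(x) is uncomputable, so this sketch substitutes zlib-compressed
# length, a computable *upper bound* on complexity. A weak compressor
# finds no pattern in pi's digits, even though a UTM can generate the
# first n digits from an O(log n)-bit program, so their true K is tiny.

def compressed_fraction(data: bytes) -> float:
    """Compressed length as a fraction of original length."""
    return len(zlib.compress(data, level=9)) / len(data)

mp.dps = 10_000  # work with 10,000 decimal digits
pi_digits = mp.nstr(mp.pi, 10_000).replace(".", "").encode()
rand_digits = "".join(random.choices("0123456789", k=10_000)).encode()

print("pi digits     :", round(compressed_fraction(pi_digits), 3))
print("random digits :", round(compressed_fraction(rand_digits), 3))
# The two fractions come out nearly identical: unlike a UTM, the
# compressor cannot exploit the computable structure of pi.
```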

We can express this rigorously by saying that for every real number \delta > 0, and every number of observations n, there exists a number of observations N > n, for which,

1 - \frac{K(x_N)}{|x_N|} < \delta,

where x_N is the string generated by making N observations of the source. What this says is that we can always make enough observations to bring the resulting string arbitrarily close to a Kolmogorov random string, though this does not require actual convergence in the limit. Note that this definition does not require convergence to any particular distribution either: some physical systems could, for example, simply change distributions as a function of time, or never have a stable distribution of states at all.
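
As a rough heuristic, the condition can be probed by substituting a real compressor for K. This substitution is mine, and it’s one-sided: compressed length only upper-bounds K, so compressibility certifies non-randomness, while incompressibility is merely consistent with randomness, as the \pi example above shows. A sketch, using os.urandom as a stand-in source and an arbitrary \delta:

```python
import os
import zlib

# Heuristic probe of the condition 1 - K(x_N)/|x_N| < delta. Since K is
# uncomputable, zlib-compressed length stands in for it; it only
# upper-bounds K, so this check can reject randomness but never
# confirm it.

def deficiency_estimate(x: bytes) -> float:
    """1 - C(x)/|x|, with C = zlib-compressed length standing in for K."""
    return 1.0 - len(zlib.compress(x, level=9)) / len(x)

def observe(N: int) -> bytes:
    """Stand-in source: N bytes from the OS entropy pool."""
    return os.urandom(N)

delta = 0.05  # arbitrary illustrative threshold
for N in (100, 1_000, 10_000, 100_000):
    d = deficiency_estimate(observe(N))
    print(f"N={N:>6}  estimated deficiency = {d:+.4f}  "
          f"{'< delta' if d < delta else '>= delta'}")
```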
