Measuring Spatial Diffusion

In a previous article, I introduced a method that can quickly calculate spatial entropy, by turning the distances in a dataset into a distribution over [0,1], which then of course has an entropy. This measure, however, does not vary with scale: if you multiply the entire dataset by a constant, the measure of entropy doesn’t change. Perhaps this is useful for some tasks, though it plainly does not capture the fact that two datasets could have the same proportional distances, but different absolute distances. If you want to measure spatial diffusion on an absolute basis, then I believe the following could be a useful measure, which also has units of bits:

\bar{H} = \sum_{i \neq j} \log(||x_i - x_j||).

Read literally, you take the logarithm of the distance between every pair of points in the dataset, and sum the results, which will of course vary as a function of those distances. As a result, if you scale a dataset up or down, the value of \bar{H} will change as a function of that scale. In a previous note, I showed that we can associate any length with an amount of information given by the logarithm of that length, and so we can fairly interpret \bar{H} as having units of bits.
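To make the scale dependence concrete, here is a minimal sketch of \bar{H} in Python (the base-2 logarithm and the restriction to distinct pairs are my readings of the definition, and the function name is my own; duplicate points are assumed absent, since the logarithm of a zero distance is undefined):

```python
import numpy as np

def h_bar(X):
    # Sum of log-distances over all distinct pairs i < j; duplicate
    # points (zero distance) are assumed absent, since log(0) is undefined
    i, j = np.triu_indices(len(X), k=1)
    d = np.sqrt(((X[i] - X[j]) ** 2).sum(axis=1))
    return float(np.log2(d).sum())

X = np.random.rand(100, 3)
# Doubling every coordinate adds log2(2) = 1 bit per pair, and there
# are C(100, 2) = 4950 distinct pairs, so h_bar grows by 4950 bits
growth = h_bar(2.0 * X) - h_bar(X)
```

Unlike a scale-invariant measure, scaling the dataset shifts \bar{H} by a fixed number of bits per pair, which is exactly the absolute-scale sensitivity described above.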


Information, Length, and Volume

I’ve written about this topic a few times, and having reviewed it, the articulation was a bit sloppy, so I thought I’d restate it a bit more formally. The basic idea is that there’s some capacity for storage in each unit of length, and this is physically true, in that you can take a string, for example, and subdivide its length into equal intervals with markings. Then, simply place an object upon one of the markings –

This is a unique state of the system, and placing the object upon each other such marking defines a different unique state of the system.

If there are N such markings, then the system can be in N states, and therefore store \log(N) bits of information. This system is equivalent to a binary string of length N, where exactly one bit is on at a time. We can generalize the connection between length and information by assuming that the length is divided into N segments, each of which can be in K states. To continue with the physical intuition, this could be done by assigning up to K objects to each of the N segments along the length, where the number of objects placed upon a given segment determines its state. For example, if you have 2 pebbles upon a given marking along the string, that would be the second state of that segment. This generalization associates a given length with a K-ary string of length N, which can be in K^N states, and store N\log(K) bits.

We can set a variable n to have units of K-ary switches per unit length, and so given a length l, we have N = nl. The number of bits that can be stored along the length is therefore given by I = \log(K^N) = \log(K^{nl}) = nl\log(K). Note that we can treat K and n as constants as a function of l, and as a result, the information content associated with a given length l is O(l). We can generalize this to volume, where n would instead have units of K-ary switches per unit volume, from which it follows that the information content associated with a given volume V is O(V).
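As a quick sanity check of the formula above (the values for n and K are illustrative choices of mine, not anything fixed by the argument), the capacity I = nl\log(K) scales linearly with the length l:

```python
import math

def capacity_bits(l, n=100.0, K=4):
    # N = n * l K-ary segments along a length l, each storing log2(K) bits
    N = n * l
    return N * math.log2(K)

# Doubling the length doubles the number of segments, and hence the capacity
assert capacity_bits(2.0) == 2 * capacity_bits(1.0)
```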

This in fact demonstrates that there is a proportional relationship between substance and information. For an exhaustive treatment of this topic, my first real paper on physics implies an actual equation that relates energy and information (see Equation 10), and they are in that case again proportional. What the work above shows is that, as a practical matter, at the macroscopic scale, the same proportional relationship holds, since my paper implies that the information content of a system is O(E), where E is the total energy of the system.

Supervised Prediction

There’s some code floating around in my library (as of 6/21/21) that I never bothered to write about, which generates a value of delta for each row in the dataset, independently, effectively implementing the ideas I go through in this paper on dataset consistency. What this means, as a practical matter, is that you know how far you can go from a given point in the dataset before you encounter your first inconsistent classification. For example, if x_i is the vector for row i of the dataset, then the algorithm finds the distance \delta_i such that any sphere centered at x_i with a radius \bar{\delta} > \delta_i will contain a vector whose class is different from the class of x_i. Obviously, you can use this to do supervised prediction, by simply using the nearest neighbor algorithm, and rejecting any predictions that match to row i but are further away from x_i than \delta_i.
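A hypothetical sketch of this rejection rule (the helper names are mine, and computing \delta_i as the distance to the nearest row of a different class is one natural reading of the definition above, not the original code):

```python
import numpy as np

def compute_deltas(train_X, train_y):
    # One reading of delta_i: the distance from x_i to the nearest
    # row of a different class; any sphere around x_i with a larger
    # radius contains an inconsistent classification
    D = np.sqrt(((train_X[:, None] - train_X[None, :]) ** 2).sum(axis=-1))
    return np.array([D[i, train_y != train_y[i]].min()
                     for i in range(len(train_X))])

def predict_or_reject(x, train_X, train_y, deltas):
    # Nearest neighbor prediction, rejected when the match lies
    # beyond the consistent radius of its row
    dists = np.sqrt(((train_X - x) ** 2).sum(axis=1))
    i = int(np.argmin(dists))
    return None if dists[i] > deltas[i] else train_y[i]
```

Rejections (None) flag exactly the testing rows described above: those that fall outside every consistent region of the training data.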

This is exactly what I did in my first set of A.I. algorithms, and it really improves accuracy. Specifically, using just 5,000 training rows from the MNIST Numerical dataset, this method achieves an accuracy of 99.971%, and it takes about 4 minutes to train. The downside is that you reject a lot of predictions, but by definition, the rejected rows from the testing dataset are inconsistent with the training dataset. What this means, as a practical matter, is that you need more data to fill the gaps in the training dataset, but the algorithm allows you to hit really high accuracies without much data, and that’s the point of the algorithm. In this case, 30.080% of the testing rows were rejected. But the bottom line is, this obviously catches predictions that would otherwise have been errors.

A Note on The Neutrino

The neutrino seems to be capable of the same type of indefinite movement in a vacuum that a photon is, and also apparently has a velocity of c. However, the neutrino also appears to have mass, which suggests a gravitational field. None of these things are possible in relativity, yet experiments suggest that this is how it is. This is, however, perfectly fine in my model of physics, and in fact, I think I now have an elegant theory as to what’s going on in the neutrino.

In a note from earlier tonight, I postulated that the indefinite motion of a photon is due to a different state of the force carrier of gravity, which, upon interaction with the photon, causes it to change position indefinitely, at least when undisturbed in a vacuum. Similarly, a mass emits this same force carrier, in a different state, as the force carrier for gravity, which in turn changes the momentum of the particles with which it interacts.

So in the case of the neutrino, we have this same force carrier at work, changing the position of the neutrino, and being emitted as the force carrier of gravity. There are no issues with conservation, because the force carrier of gravity cannot carry finite momentum anyway, since it cannot be exhausted or insulated against, and so it doesn’t matter how many instances of this force carrier exist, so long as the number is not zero, which is the only distinct case.

This also suggests the possibility of unstable neutrino-like particles, because my model implies that this force carrier could also change the state of a particle, not just its position (see Section 3 of this paper). Bizarrely, it also suggests the existence of a particle that is physically stationary, and never changes state, and is therefore literally stationary at a given moment in time, though nonetheless emits gravity (see Footnote 7 of “A Unified Model of the Gravitational, Electrostatic, and Magnetic Forces“, which implies that such a particle would have a velocity of zero in time).

A Note on Absolute Time

Consider a truly stationary object, that also doesn’t change its properties, ever –

The object in question never moves, and never changes in any way at all, though it exists, in that another system could cause it to accelerate.

Now consider time with respect to this object –

It simply doesn’t exist, in the absence of information about other systems that are capable of change. As a consequence, if this system exists in a vacuum, in isolation, and is the only thing in the Universe, then there is no measurable time at all, since there is no measurable change.

Obviously, we don’t live in such a Universe, but it points to something fundamental, which is that time itself depends upon multiplicity of outcome –

If only one thing can happen, then time consists of only that one thing.

This in turn suggests that time is perhaps reasonably thought of as the set of all possible states of the Universe, in some connective order, that in turn defines what sequences of the Universe are physically possible.

A Note on Momentum, Light, and Gravity

In my model of physics, energy is quantized (see Section 2 of this paper), and moreover, energy is the fundamental underlying substance of all things. This gives photons real substance, since they’re comprised of nothing other than energy that happens to be moving. In contrast, mass is energy that happens to be stationary, absent kinetic energy, which in this view is simply light attached to mass that causes the mass to move. This is obviously not inconsistent with Einstein’s celebrated mass-energy equivalence, but is instead more abstract, in that it implies that mass and light are interchangeable and equivalent.

I also showed that, from a set of assumptions that are completely unrelated to relativity, that have nothing to do with time directly, and that are instead rooted in combinatorics and information theory, you end up with the correct equations of physics, for time-dilation, gravity, charge, and magnetism, and I’ve even tackled a significant amount of quantum mechanics as well (see this book, generally).

I spend most of my time now thinking about thermodynamics, and A.I., because the two are in my opinion deeply interconnected, and have commercial applications to drone technology, though I still think about theoretical physics, and in particular, the fact that light will apparently travel indefinitely in a vacuum. Related, in my opinion, is the fact that gravity cannot be insulated against. Both suggest an underlying mechanic that is inexhaustible, by nature. Moreover, mass-energy equivalence plainly demonstrates the connections between mass and light, which I think I’ve likely exhausted as a topic in the first paper I linked to above. However, I did not address the connections between the apparently perpetual nature of the motion of light in a vacuum, and the apparently inexhaustible acceleration provided by gravity.

I now think I have an explanation, which is as follows:

Whatever substance it is that allows for the indefinite motion of a photon is equivalent to the force carrier of gravity, though in a different state. Whatever this substance is, in the case of a photon, it is withheld by the photon, and in the case of mass, expelled by the mass, which we would therefore view as the force carrier of gravity.

This model effectively assumes that the force carrier of gravity also has two states, just like energy itself, which is either kinetic or massive. In the case of a photon, we have a force carrier that causes the photon itself to move, indefinitely. In the case of a mass, we have an expelled, independently moving force carrier for gravity, that causes unbounded acceleration in other systems (in that it cannot be insulated against or exhausted). In the jargon of my model, position is itself a code, as is what I call the “state” of a particle, which determines all of its properties. For example, the code for an electron is distinct from the code for a tau lepton (see Section 3 of this paper). This actually works quite well at the quantum level, where you have bosons that can literally change the properties of another particle altogether, which my model would view as an exchange of code, which is mathematically equivalent to momentum (see Equation 10 of this paper).

In this view, the gravitational force-carrier when acting on a photon changes the position code of the photon, indefinitely, causing it to move, indefinitely. Because code and momentum are equivalent in my model, this would be an exchange of momentum from the force carrier to the photon, that causes it to change position, which is, again, defined by a code. This view implies that the locomotive force of movement itself is due to this force carrier changing a code within the photon, which causes the appearance of motion over time. In the case of a mass emitting gravity, this force carrier would instead change the state of some exogenous particle, changing its properties, in this case its momentum and total energy.

There is a question of what happens to this force carrier, assuming it exists, when light travels through a medium –

Light certainly slows down in a medium, and light certainly changes behavior in some mediums, both of which suggest the possibility of separating light from whatever this force carrier is. If possible, then perhaps it could be applied to other systems, causing motion in those systems. Obviously, this would be a very valuable tool, if it exists, and if it can be separated from light and further manipulated. Moreover, as a matter of theory, it implies a conservation between mass and energy, in that mass emits gravity, whereas a photon does not, and this fills the gap.

You can quibble about this a bit, because the force carrier for gravity is emitted indefinitely, presumably separately, resulting in a large number of force carriers over time. In contrast, you arguably don’t need that to be the case to cause the position of a photon to change. However, because the accelerating power of gravity cannot be exhausted, gravity cannot carry finite momentum. Therefore, one such force carrier carries the same amount of momentum as any finite number of force carriers, and so there is a conservation of momentum between the photon state of this carrier and the mass state of this carrier.

Object Detection Using Motion

I already wrote an object tracking algorithm that is quite fast; however, that algorithm uses spatial proximity between points to determine whether or not a group of points are all part of the same object. That is, if a set of points are sufficiently close together, then they’re treated as part of one object. Then, that object is tracked as it moves.

It just dawned on me that you could also track an object by looking at the motions of a set of points. For example, if you have two objects, one of which is stationary and the other of which is moving, then the point data for those objects will reflect this. You can calculate the velocity of the points by using the nearest neighbor method to map each point in frame 1 to a point in frame 2, and so on (this is exactly how my earlier object tracking algorithm works). You could then look at the change in position between each frame, and assuming the frames are separated by a uniform amount of time, that change in position per frame is proportional to the velocity of the point.

You would then cluster the points using their velocities, which, in this case, would cause the points in the stationary object to be clustered together, since its points are not moving, and cause the points in the moving object to be clustered together, since they’re all moving at roughly the same velocity.
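A minimal sketch of the idea (the greedy clustering step, the tolerance, and the function names are illustrative assumptions on my part, not the original algorithm):

```python
import numpy as np

def estimate_velocities(frame1, frame2, dt=1.0):
    # Map each point in frame 1 to its nearest neighbor in frame 2;
    # the displacement over the frame time dt estimates the velocity
    d = np.sqrt(((frame1[:, None] - frame2[None, :]) ** 2).sum(axis=-1))
    match = d.argmin(axis=1)
    return (frame2[match] - frame1) / dt

def cluster_by_velocity(velocities, tol=0.5):
    # Greedy clustering: each unlabeled point seeds a cluster, and all
    # unlabeled points with a velocity within tol of the seed join it
    labels = -np.ones(len(velocities), dtype=int)
    next_label = 0
    for i in range(len(velocities)):
        if labels[i] >= 0:
            continue
        labels[i] = next_label
        near = np.sqrt(((velocities - velocities[i]) ** 2).sum(axis=1)) < tol
        labels[(labels < 0) & near] = next_label
        next_label += 1
    return labels
```

With one stationary object and one moving object, this separates the two point sets by velocity rather than by position, which is the distinction drawn above.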