Updated Touch Screen Algorithm (Including Dataset)

Following up on my previous note, I’ve adjusted the technique of the camera-based touch screen method, though not the algorithm itself, to make use of an external webcam pointed at the screen. The reason is obvious: the camera now focuses on your hand, though the previous note demonstrated that you get decent precision without even looking at the user’s hand. In this case, you get sensitivity to 1 centimeter of motion, with 95% accuracy, which I tested using the on-screen ruler below:

A screen shot of my desktop, with the ruler on the top right.

The dataset was generated by placing my pointer finger at two points on the screen, one centimeter apart, adjusting my posture for each photograph. I did this at the top left of the screen, and at the top right, producing four classes in total. There are ten images per class, for a total of forty images in the dataset. The prediction algorithm is simply nearest neighbor, fully vectorized, with vectorized pre-processing of each image, which allows for real-time processing on an iMac. The immediate goal is to translate this to a language native to Apple machines, which should allow for even faster processing, and direct access to devices.
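
For concreteness, here’s a minimal sketch of the vectorized nearest neighbor step in Octave. The names Dataset, Labels, and x, and the use of squared Euclidean distance, are my own assumptions for illustration, not necessarily the exact implementation linked below:

% Minimal sketch of vectorized nearest neighbor prediction.
% Dataset: one flattened, pre-processed image per row.
% Labels: class label for each row of Dataset.
% x: a flattened test image (row vector of the same width).
function label = nn_predict(Dataset, Labels, x)
  diffs = Dataset - x;            % broadcast x against every row
  dists = sum(diffs .^ 2, 2);     % squared Euclidean distance per row
  [~, idx] = min(dists);          % index of the nearest training image
  label = Labels(idx);
end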


If two classes of images are different to the human eye, then as a general matter, they will be different to the algorithm as well. This implies that a closer shot, taken from, for example, the four corners of a monitor, should produce even greater precision (i.e., sensitivity beyond 1 cm), and greater accuracy at those precisions. Note that you can easily produce a single total vector by simply listing the images captured by each of the four cameras in some order. As a result, there’s no need for complex analysis to produce a single composite image from the four cameras –

You don’t do that; you keep the four images separate, and use each of the four resultant image matrices, which are then flattened in some order into a single row vector, with a number of columns equal to four times the number of columns you’d get from one flattened image matrix. For context, in Octave, M(:) flattens a matrix M into a column vector, so M(:)’ gives the corresponding row vector. So in this case, for each image capture (i.e., all four cameras firing), you would just have a row vector given by v = [M1(:)’ M2(:)’ M3(:)’ M4(:)’].
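
To make the dimensions concrete, here’s a toy example in Octave, using random matrices as stand-ins for the four captured images (the sizes are arbitrary):

% Toy example: four 480 x 640 stand-in images, flattened and
% concatenated into a single row vector.
M1 = rand(480, 640); M2 = rand(480, 640);
M3 = rand(480, 640); M4 = rand(480, 640);
v = [M1(:)' M2(:)' M3(:)' M4(:)'];
size(v)    % ans = 1 1228800, i.e., 4 * 480 * 640 columns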

Below you can find the dataset, and the command line code:

Touch Screen Dataset

Touch Screen Algorithm

The code attached below is the beginning of the algorithm I mentioned in a previous note on gesture classification, intended to substitute for a mouse. It turns out, after initial testing, that there appears to be enough information in your posture to figure out what point on the screen you’re touching, even without a camera or other sensor monitoring your actual hands. Based upon preliminary testing, the resolution is about one inch, in that it can tell within one inch where you’re actually touching along some horizontal line.

I’m going to redo the dataset, not only because I’ve used too few images, but also, frankly, because I look awful. All you need to do is touch the four corners of the screen, and take pictures of yourself a few times in each position, which will define four classes of images, with a webcam mounted at the top-center of your monitor (e.g., I’m working on an iMac). I did a bonus class of touching the middle left and middle right of the monitor, and the accuracy was still perfect. I wore headphones in some classes and not others, and it didn’t matter. The plain implication is that you don’t need to see or monitor your hands, as long as you have enough information from your total posture to know where you’re pointing. I’m going to do another full dataset where I use an on-screen ruler to be more precise. If that works, then I think that’s it: the algorithm is already certainly real-time, even in Octave, and so I suspect that, when written in a native Apple language, it will only be faster.

A Note on Energy and Probability

If the Universe has a finite total energy (ignoring fields, which appear to be unbounded in energy over time), and energy is quantized, then there is with certainty a distribution over the possible energies of systems. For example, there is only one instance of the entire Universe, which means there’s only one possible system with an energy equal to the total energy of the Universe, at a given moment in time. We can then use combinatorics to partition this total amount of energy into different systems, and basic counting principles will imply a distribution of possible energies. Of course, we can’t be certain that all such energies are actually physically possible, but the notion does imply that there is in fact an underlying distribution that describes the densities of systems of a given energy.

In particular, assume that the total energy of the Universe comes in N discrete chunks of energy. All systems must therefore be a subset of this total energy, and since we’re concerned only with total energy, let’s assume that there’s no order in which the particular chunks of energy are selected. This would reduce the number of possible instances of a system that consists of K \leq N chunks of energy to {N \choose K}. You would then remove physically impossible energy levels from this set, which is obviously not trivial, but the point remains that there must be such a distribution, if the total energy of the Universe is finite, and energy is itself quantized. That is, there are only N chunks of energy in the Universe, and therefore, a finite number of ways to select any K of them. Some energy levels might not be physically possible, and we remove those.
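
As a quick sanity check of the counting argument, here’s a minimal Octave sketch, with an arbitrary toy value of N, that computes {N \choose K} for each K and normalizes the counts into a distribution; the peak is at K = N/2, with both extremes rare:

% Counting the instances of systems built from K of N chunks.
N = 50;                                      % arbitrary toy total
K = 0:N;
counts = arrayfun(@(k) nchoosek(N, k), K);   % {N choose K} per energy level
density = counts / sum(counts);              % implied distribution over K
[~, peak] = max(density);
K(peak)                                      % ans = 25, i.e., N/2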

Note we are not counting how many configurations there are of a given energy level (e.g., scrambling their positions in space), but instead counting the number of ways to assemble a collection of discrete chunks of energy into a system with some total energy. This simple method provides a basis for the intuition that extraordinarily high energy systems, and extraordinarily low energy systems, are both rare. Obviously, there’s more to a system than its total energy, since the components will have, for example, position, and what I call “state”, and I show that using state and position alone, you can get everything you need to define all the elementary particles, and time-dilation. What I didn’t discuss, and concluded later, is that particles can actually change their relationships to one another, independent of state and position, since, for example, an atom is simply not the same as an unassociated group of subatomic particles at the same distances –

There’s a relationship between the particles in an atom that defines new physics (for example, electron orbitals) that you just don’t find with free electrons. The point being that what actually constitutes a system is, at times, objectively real.

Finally, we can also think about the number of possible ways to partition the energy of the Universe, assuming again that it’s finite and quantized, which will give us a sense of how the energy of the Universe can be allocated among the systems that comprise it. This count would instead be given by the Bell number B_N, the number of ways to partition the N discrete chunks of energy in the Universe. These partitions would allow us to consider the possible states of the Universe as a whole, looking only to the energies of the systems that together comprise a given configuration of the Universe.
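
For what it’s worth, the Bell numbers grow extremely quickly; here’s a short Octave sketch that computes them via the Bell triangle recurrence (e.g., B_1 through B_5 are 1, 2, 5, 15, 52):

% Bell numbers via the Bell triangle: row n ends with B(n),
% and the next row starts with the last entry of the current row.
function B = bell_numbers(n_max)
  row = [1];
  B = zeros(1, n_max);
  B(1) = 1;
  for n = 2:n_max
    next = zeros(1, n);
    next(1) = row(end);
    for k = 2:n
      next(k) = next(k-1) + row(k-1);
    end
    row = next;
    B(n) = row(end);
  end
end
% bell_numbers(5) returns 1 2 5 15 52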

Gesture Classification

I’ve revisited a simple video dataset I created a while back, where I raise either my right or left hand, using my latest image processing algorithms. The initial results are excellent, with 100% accuracy, and what’s better, the preprocessing step, where each image frame of the video is converted to a data structure, now takes about 0.2 seconds per frame, allowing for 5 frames per second to be processed. The prediction step, where a sequence of frames (already converted into a data structure) is classified as either a left hand or right hand sequence, takes about 0.01 seconds per sequence. This suggests that, as a general matter, these algorithms can be applied to real-time gesture classification given underlying video. As a result, I’m going to pursue the matter a bit further, and if other gestures can be classified with comparable accuracy, then I’m going to invest the time and write software in a language native to Apple devices, with the goal of substituting a mouse with gestures, but in a way that not only works, but is intuitive and easy to use. Moreover, I suspect there’s enough information in posture alone to figure out where it is that you’re pointing, which could allow users to simply touch the screen, even though their hands aren’t visible. That is, the webcam doesn’t need to see your hand to know where you’re pointing.

Gesture Dataset

A Note on Multiplicity in Time

Introduction

I’ve been exploring notions of physical multiplicity for years, and I think you can make rigorous sense of it, using the ideas I’ve already developed in physics. Specifically, I think a multi-verse is fine, and in fact, you can imagine multiplicity expressed through a space that actually exists, where all possible outcomes from a given moment into the next are actually physically real. Though this implies a question:

How are they arranged in that space?

As stated in a previous note, as far as I know, multiplicity of outcome is real, given the same initial conditions, since collisions require only conservation of momentum (assuming nothing else changes in the collision). So given a moment in time, all future outcomes would be physically extant in the space of time itself, which would imply a space that grows, but is fixed at all generated points. That is, you’d have a tree that produces, from inception, all possible next states. Instead, I think you also need a source that generates basically the same set of initial conditions over and over, and this would be like the Big Bang on repeat, with no material differences between proximate instances. This would cause mass to move not only through Euclidean space, but also through the space of time itself. This creates a space that is, at any fixed point, basically the same forever, but nonetheless not truly fixed, with particles that have a velocity in both physical space and time, entering and leaving, moving on to the next moment, evolving as they progress through the space of time.

We can therefore imagine a line through the space of time itself, where nothing changes, because all exchanges of momentum net to zero, causing no change at all to the entire Universe, along that line. That is, this is the freak outcome Universe where every exchange of momentum nets to zero change, producing a static portrait of inception itself –

It’s the coin that always lands on its side, forever.

Imagine this as a line through a plane, with increasingly disparate outcomes at increasing distances from this line –

This is all you need to imagine the space of time, organized in the plane.

Dark Energy

Now imagine gravity and charge diffuse over both space and time. If they instead diffuse over only space, then you end up with, e.g., a law of gravity that decreases in strength as a linear function of distance (just use basic trigonometry, assuming a constant rate and distribution of emission of force carriers, projected towards a line at increasing distance from the point origin of the force carriers). This is, of course, wrong, because gravity obeys a square law of diffusion. So now instead assume that gravity diffuses over both time and space –

You end up with a square law of diffusion, which is correct. This in turn suggests the possibility that both gravity and charge are emitted through the space of time itself, in addition to Euclidean space. However, if this is the case, then it poses a related problem:

It implies that identical masses would be positioned proximately to each other in the space of time itself, subject to gravity. They would therefore attract each other, through gravity, which could literally cause a collision between two physically proximate moments in time. Since this doesn’t seem to happen, there must be a mechanism that prevents it, though I think dark energy could be the result of gravity from proximate outcomes in time (note that we wouldn’t otherwise be able to interact with the associated mass). That is, if dark energy exists at a point, it’s because there’s mass at that point in space, at a proximate point in time, but not in our time, causing the appearance of inexplicable gravity that is unassociated with any mass. If this is correct, there should be trace amounts of inexplicable gravity basically everywhere. Testing for this on Earth probably won’t work, because the Earth’s gravitational field is too dominant. So, put a device in space, far enough from Earth and any other planet, and attempt to detect gravity that doesn’t appear to originate from any known mass.

Antigravity

You can posit forces to ensure that there’s no meaningful intrusion of mass from one moment in time into the next, which obviously doesn’t happen often at a large scale, otherwise we’d notice, though spontaneous emergence and disappearance of energy is possible at very small scales of time and space. For example, imagine a force of repulsion between masses, which, as far as we know, doesn’t exist in our Universe, since it would be antigravity. However, it would serve the role well, because it would prevent large-scale intrusions of mass crossing from one moment in time to the next, which, as noted, plainly doesn’t happen. It would also complete the symmetry of gravity, which is anomalous in the context of charge and magnetism, in that there’s only attraction. Because light interacts with gravity, by analogy we could posit that light interacts with this force, which would in turn prevent material intrusions of light as well.

The argument above suggests that antigravity obeys a square law of diffusion as well, which implies that antigravity would emerge in our Universe, basically everywhere, without any associated mass. However, because there’s no evidence of repulsion between masses, in order for antigravity to make sense, antigravity cannot cause acceleration in Euclidean space, and only acceleration in time. That is, gravity accelerates mass in only Euclidean space, as an attractive force, and antigravity accelerates mass in only the space of time, as a repulsive force. Finally, imagine the plane described above as a lattice of masses. Any mass that moves in any direction due to repulsion will be repelled in the opposite direction by some other mass, preventing any material change in relative positions in time, as they all progress through the space of time itself, their relative positions basically fixed.

Complex Length

Because we can’t physically measure complex numbers, it is at least sensible that distance in time actually has complex units: we don’t experience and cannot measure movement in the space of time; we can instead only measure physical change in Euclidean space, as a proxy for actually traversing the space of time itself. Moreover, the mathematics also implies that time itself has complex units (see Footnote 7 of A Unified Model of the Gravitational, Electrostatic, and Magnetic Forces).

Multiplicity, Time, and Waves

I’ve argued that multiplicity and waves are physically related, and this makes perfect sense, because waves are by definition a distribution over some space. Now consider the idea not of thermodynamic reversibility, but of real mathematical reversibility, and consider it in the context of time –

How many initial conditions can give rise to the same final outcome?

Because conservation of momentum is typically the only constraint for collisions, the answer is an infinite number of initial conditions. So now imagine moving backwards through time, from a given state of a system, to its possible prior states, and what do you have?

A wave.

So this suggests at least the possibility that the transition from a point particle to a wave is the result of a change that flips a switch on the direction of time itself, causing a particle to propagate forwards through time, as if it were propagating backwards.

A Note on Physical Waves

As far as I know, exchanges of momentum between colliding systems are permitted, provided they conserve vector momentum. This suggests multiplicity of outcome, since there are an infinite number of exchanges of momentum between, for example, two colliding particles, that will conserve momentum. I happen to quantize basically everything in my model of physics, but it doesn’t matter, because you still get multiplicity of outcome, albeit finite in number. Note that a wave can be thought of as a set of individual, interacting frequencies that together produce a single composite system. I’m not an expert on the matter, and I’m just starting to look into these things, but I don’t believe there’s any meaningful multiplicity to the outcome of a set of juxtaposed frequencies; instead, I believe you end up with the same wave every time. This would make perfect sense if the quantity of momentum possessed by a wave were incapable of subdivision, so that an interaction between two individual waves either happens in full or not at all. You could, for example, have wave interference at offsetting points of two waves, each possessing equal quantity in opposite directions when and where they interact, producing a zero height at each such point. As the probability of interaction increases, you’d have an increasingly uniform zero wave.
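
Here’s a minimal numerical illustration in Octave of the offsetting case: two equal-amplitude waves of opposite phase netting to a uniform zero wave (the frequency and sample count are arbitrary choices of mine):

% Two equal and opposite waves cancelling at every point.
t = linspace(0, 1, 1000);      % one second of samples
f = 5;                         % arbitrary frequency, in Hz
w1 = sin(2*pi*f*t);            % first wave
w2 = sin(2*pi*f*t + pi);       % equal quantity, opposite direction
composite = w1 + w2;           % the two offset when they interact
max(abs(composite))            % effectively zero (machine precision)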

Interestingly, this suggests the possibility that the rules of physics actually have complexity, in the sense that you might have primitive rules for some interactions that impose what is, in this case, binary quantization (i.e., either it happens or it doesn’t). This is alluded to in Section 1.4 of the first link above, where I discuss applications of Kolmogorov complexity to physical systems.

Partitioning Datasets

A while back, I had an idea for an algorithm that I gave up on, simply because I had too much going on, but the gist is this: my algorithms can flag predictions that are probably wrong, so you pop all those rows into a queue, and let the rest of the predictions go through, in what will be real time, even on a personal computer. The idea is to apply a separate model to these “rejected” rows, since they probably don’t work with the model generated by my algorithm. This would allow you to efficiently process the simplest corners of a dataset in polynomial time, and then apply more computationally intense methods to the remainder, using threading and all the normal capacity allocation techniques, which will still allow you to fly in close to real time; you just delay the difficult rows until they’re ready. The intuition is that you stage prediction based upon whether the data is locally consistent or not, and this can vary row by row within a dataset. This really is a bright-line, binary distinction (just read the paper in the last link), and so you can rationally allocate processing capacity in this way: if a prediction is “rejected”, you bounce it to a queue until it has some critical mass, and you then apply whatever method you’ve got that works for data that isn’t locally consistent, which is basically everyone else’s models of deep learning.
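
Here’s a minimal sketch of that staging logic in Octave; predict_fast and predict_deep are hypothetical placeholders (not actual Black Tree functions), standing in for the cheap flagging model and for whatever model handles the rejected rows:

% Staged prediction: cheap model first, rejected rows queued
% for a more computationally intense fallback model.
function labels = staged_predict(X)
  n = rows(X);
  labels = zeros(n, 1);
  rejected = false(n, 1);
  for i = 1:n
    % hypothetical fast model: returns a label and a rejection flag
    [labels(i), rejected(i)] = predict_fast(X(i, :));
  end
  queue = find(rejected);                   % rows flagged as probably wrong
  if !isempty(queue)
    % hypothetical fallback model applied only to the queued rows
    labels(queue) = predict_deep(X(queue, :));
  end
end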

Another Thought on Waves

If we interpret waves literally, then you can have a wave that doesn’t have an exact location, but instead has a density or quantity function given a location. That is, I can’t tell you where a wave is, though I can delimit its boundaries, and provide a function that, given a coordinate, will tell you the density or quantity at that point. Information may or may not be conserved physically; I show that it isn’t in some cases, simply because energy is not conserved, unlike momentum, which is, apparently, always conserved, as far as I know. Specifically, gravity causes unbounded acceleration, which violates conservation of energy (macroscopic potential energy just doesn’t make any sense), but you don’t need to violate the conservation of momentum if you assume that either an offset occurs when gravity gives up energy (e.g., the emission of some other particle or set of particles), or gravity has non-finite momentum to begin with (see Equations (9) and (10) of A Computational Model of Time-Dilation). Gravity is by definition unusual, since it cannot be insulated against, and appears to have the ability to give up unbounded quantities of momentum to other systems. At a minimum, the number of gravitational force carriers that can be emitted by a mass of any size appears to be unbounded over time. As a result, the force carrier of gravity is not light. The same is true of electrostatic charge and magnetism, neither of which can possibly be carried by a photon, given these properties.

If information is conserved in this case, then when a particle transitions from a point particle to a wave, the amount of information required to describe the particle should be constant. Let’s assume, arguendo, that the amount of information required to describe the properties of the particle in question doesn’t change. That is, for example, the code for an electron is the same whether it’s in a wave state or a point state. If this is the case, then the only remaining property is its position, which is now substituted by a function that describes the density of the electron at all positions in space, which will in turn delimit its boundaries, if it has any (i.e., a density of zero at all points past the boundary). Again, assuming information is conserved, this implies that the amount of information required to describe the density function of the wave will be equal to the amount of information required to describe its position as a point particle. If it turns out that space is truly infinite, then that function cannot have finite complexity.

Plain English Summary of Algorithms

I imagine if you read this blog, you can probably figure out for yourself how things work, though it’s always nice to have a high-level explanation, since even for a sophisticated reader, this could mean the difference between taking the time to truly understand something, and simply dismissing it in the interest of limited time. As a result, I’ve written a very straightforward explanation of the core basics of my deep learning software, which links to more formal papers that describe it in greater detail. I did this because I’m giving a brief talk at a MeetUp today, and since I already did the work, I figured I’d share it publicly.

Black Tree AutoML (Plain English Summary).

Defining a Wave

It just dawned on me that you can construct a clean definition of a total wave, as a collection of individual waves, by simply stating their frequencies and their offsets from some initial position. For example, we can define a total wave T as a set of frequencies \{f_1, f_2, \ldots, f_k\}, and a set of positional offsets \{\delta_1, \delta_2, \ldots, \delta_k\}, where each f_i is a proper frequency, and each \delta_i is the distance from the starting point of the wave to where frequency f_i first appears in the total wave. This would create a juxtaposition of waves, just like you find in an audio file. Then, you just need a device that translates this representation into the relevant sensory phenomena, such as a speaker that takes the frequencies and articulates them as an actual sound. The thing is, this is even cleaner than an uncompressed audio file, because there’s no averaging of the underlying frequencies –

You would instead define the pure, underlying tones individually, and then express them physically on some device.
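
As an illustration, here’s a short Octave sketch that renders this (frequency, offset) representation into samples; the variable names, and the use of pure sine tones, are my own assumptions:

% Rendering a total wave T from frequencies f_i and offsets delta_i.
freqs  = [440, 660, 880];          % f_i, in Hz
deltas = [0, 0.25, 0.5];           % delta_i, offsets in seconds
fs = 44100;                        % sample rate
dur = 2;                           % total duration, in seconds
t = (0:dur*fs - 1) / fs;
total = zeros(size(t));
for i = 1:numel(freqs)
  on = t >= deltas(i);             % f_i first appears at delta_i
  total(on) += sin(2*pi*freqs(i)*(t(on) - deltas(i)));
end
% soundsc(total, fs);              % articulate the tones on a speaker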