Quantized Amplitudes, A.I., and Noise Reduction

Quantize a space of amplitudes to achieve a code. That is, each one of finitely many peak amplitudes corresponds to some symbol or number. So if, e.g., a signal has a peak amplitude of 4, then the classifier / symbol it encodes is 4. Now posit a transmitter and a receiver for the signals. Send a signal from the transmitter to the receiver, and record both the known peak amplitude transmitted (i.e., the classifier) and the amplitudes observed at the receiver. Because of noise, the received amplitudes can differ from the transmitted amplitudes, including the peak amplitude, which we’re treating as the classifier. For every received signal, use A.I. to predict the true transmitted peak amplitude / classifier. To do this, take a fixed window of observations around each peak, and provide that window to the ML algorithm. The idea is that a window around the peak carries more information about the signal than the peak value alone, and so even with noise, as long as the true underlying amplitude is known in the training dataset, all transmitted signals subject to the same noise process should be similarly incorrect, allowing an ML algorithm to predict the true underlying signal. Below is an original clean signal (left), with a peak amplitude / classifier of 5, and the resultant signal with some noise (right). Note that the amplitudes are significantly different, but nonetheless my classification algorithms can predict the true underlying peak amplitude with great accuracy, because the resultant noisy curves are all similarly incorrect. Note also that the larger the set of signals, the more compression you can achieve, since the number of symbols required to encode a number N is log_b(N), where b is the number of distinct peak amplitudes, and so the encoding shrinks as b grows. The datasets attached use 10 signals, with peak amplitudes of 1 through 10.
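The code that generates the datasets is attached rather than reproduced here, so the following is only a minimal C++ sketch of the encoding step, under my own assumptions: a simple triangular pulse carrying the desired peak amplitude, a peak found by a local-maximum test, and a fixed window of samples around that peak. The pulse shape, signal length, and window width are illustrative choices, not necessarily those used in the attached code.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Build a clean triangular pulse whose peak amplitude equals `label`.
    std::vector<double> clean_signal(double label, int length = 101) {
        std::vector<double> s(length, 0.0);
        int mid = length / 2;
        for (int i = 0; i < length; ++i)
            s[i] = label * (1.0 - std::abs(static_cast<double>(i - mid)) / mid);
        return s;
    }

    // Find the peak via a local-maximum test (the amplitude goes up, then down).
    int peak_index(const std::vector<double>& s) {
        for (int i = 1; i + 1 < static_cast<int>(s.size()); ++i)
            if (s[i] > s[i - 1] && s[i] > s[i + 1]) return i;
        return static_cast<int>(s.size()) / 2;  // fallback if no strict peak exists
    }

    // Extract a fixed window of observations around the peak; this is the row that
    // would be handed to the ML algorithm, with the true label appended for training.
    std::vector<double> peak_window(const std::vector<double>& s, int half_width = 10) {
        int p = peak_index(s);
        std::vector<double> w;
        for (int i = p - half_width; i <= p + half_width; ++i)
            w.push_back((i >= 0 && i < static_cast<int>(s.size())) ? s[i] : 0.0);
        return w;
    }

    int main() {
        std::vector<double> s = clean_signal(5.0);  // peak amplitude / classifier of 5
        for (double v : peak_window(s)) std::printf("%.3f ", v);
        std::printf("\n");
        return 0;
    }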

Attached is code that generates datasets; simply run Black Tree on the resultant datasets. The accuracies are very high: classification is perfect for one-way additive noise up to about 225% noise. For two-way noise (additive and subtractive), the accuracies are perfect at 25% noise, and about 93% at 100% noise. The noise level is expressed as a real number, since the peak amplitudes have a distance of 1 from each other. So a noise level of 100% means that the received amplitude can differ from the true underlying amplitude by at most 1. You could also do something analogous with frequencies (e.g., using fixed-duration signals), though amplitude seems easier, since you can simply identify peaks using local-maximum testing (i.e., the amplitude goes up then down), or use a fixed duration.
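The exact noise model isn’t spelled out above, so here is one reading of it, again as a sketch: the noise level is a fraction of the unit spacing between peak amplitudes, drawn uniformly at random and added to every sample, either one-way (additive only) or two-way (additive and subtractive). The uniform distribution is an assumption on my part.

    #include <cstdio>
    #include <random>
    #include <vector>

    // `level` is a fraction of the unit spacing between peak amplitudes
    // (1.0 = 100% noise). One-way noise adds a uniform value in [0, level]
    // to each sample; two-way noise adds a value in [-level, +level].
    std::vector<double> add_noise(std::vector<double> s, double level, bool two_way,
                                  std::mt19937& rng) {
        std::uniform_real_distribution<double> dist(two_way ? -level : 0.0, level);
        for (double& v : s) v += dist(rng);
        return s;
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<double> clean = {0.0, 2.5, 5.0, 2.5, 0.0};  // toy signal, peak of 5

        auto one_way = add_noise(clean, 2.25, false, rng);  // 225% one-way noise
        auto two_way = add_noise(clean, 1.00, true,  rng);  // 100% two-way noise

        for (double v : one_way) std::printf("%.3f ", v);
        std::printf("\n");
        for (double v : two_way) std::printf("%.3f ", v);
        std::printf("\n");
        return 0;
    }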

This process could allow for communication over arbitrary distances using inexpensive and noisy means, because prediction makes the transmission literally lossless. Simply include prediction in repeaters, spaced over the distance in question. And because Black Tree is so simple, it can almost certainly be implemented in hardware. So the net idea is that you spend nominally more on repeaters that contain predictive hardware, and significantly less on cables, because even significant noise is manageable.
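To make the repeater idea concrete, here is a toy sketch of a chain of noisy hops, in which each repeater predicts the transmitted classifier and regenerates a clean signal before forwarding it, so that noise never accumulates across hops. In place of the actual Black Tree predictor, the sketch simply snaps the received peak to the nearest integer label; the hop count and noise level are arbitrary illustrative values.

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(7);
        std::uniform_real_distribution<double> noise(-0.25, 0.25);  // 25% two-way noise per hop

        double transmitted = 5.0;   // true peak amplitude / classifier
        double signal = transmitted;

        for (int hop = 1; hop <= 1000; ++hop) {
            signal += noise(rng);                                    // noisy cable segment
            int predicted = static_cast<int>(std::lround(signal));   // stand-in predictor
            signal = static_cast<double>(predicted);                 // regenerate a clean peak
        }

        std::printf("transmitted %.0f, received after 1000 hops: %.0f\n",
                    transmitted, signal);
        return 0;
    }

With 25% two-way noise per hop, the snap-to-nearest-label step in this toy is always correct, so the chain is lossless regardless of how many hops you add; the same logic applies when the predictor is an actual classifier operating on the peak window.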

Two Notes on Materials

The density of a system can be non-uniform. Just imagine a baseball with feathers glued to its surface. If someone throws that baseball at you, getting grazed by the feathers is a fundamentally different experience than getting hit with the ball itself. This seems trivial and obvious, but it implies something interesting: because the density of a system can easily be non-uniform, its momentum can, as a consequence, also be non-uniformly distributed. This is why getting grazed by the feathers is preferable: they carry less momentum than the body of the ball itself, despite the system moving as a whole, with a single velocity. As a general matter, this observation implies that interactions between materials, including mediums like air, could be locally heterogeneous, despite the appearance of uniformity.
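To state the point slightly more formally (my notation): the momentum carried by a subregion A of a system moving with a single velocity v is

p_A = \int_A \rho(x) v \, dV = m_A v,

so the feathers and the core carry momenta in proportion to their masses, even though the velocity is uniform across the system.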

A second and not entirely related note is that all materials make use of free energy from fields. This must be the case, for otherwise gravity would cause everything to collapse to a single point. This does not happen because of intramolecular forces, which are the result of free energy from fields. This is again an obvious-in-hindsight observation, but it’s quite deep, for it implies that the structure of our Universe is due to the free energy of fields: there’s a constant tension between the force of gravity and the intramolecular and atomic forces at work in literally every mass.

A Note on Suppressing Bad Genes

It just dawned on me that we might be able to cure diseases associated with individual genes that code for specific proteins by simply suppressing the resultant mRNA. This could be accomplished by flooding cells with molecules that are highly reactive with the mRNA produced by the “bad gene”, and also flooding cells with the mRNA produced by the correct “good gene”. This would cause the bad gene to fail to produce the related protein (presumably the source of the related disease), and instead cause the cell to produce the correct protein of a healthy person, since the cell is given the mRNA produced by the good gene.

A Note on Turing Equivalence and Monte Carlo Methods

I noted in the past that a UTM plus a clock, or any other source of ostensibly random information, is ultimately equivalent to a UTM, for the simple reason that any input can eventually be generated by simply iterating through all possible inputs to the machine in numerical order. As a consequence, any random input given to a UTM will eventually be generated by a second UTM that simply generates all possible inputs, in some order.

However, random inputs can quickly approach a size where the amount of time required to generate them iteratively exceeds practical limitations. This is a problem in computer science generally, where the amount of time needed to solve problems exhaustively can at times exceed the amount of time since the Big Bang. As a consequence, as a practical matter, a UTM plus a set of inputs, whether random or specialized to a particular problem, could in fact be superior to a UTM alone, since it would actually solve a given problem in some sensible amount of time, whereas a UTM without such specialized input would not. This suggests a practical hierarchy that subdivides finite time by what could be objective scales, e.g., the age of empires (about 1,000 years), the age of life (a few billion years), and the age of the Universe itself (i.e., time since the Big Bang). This hierarchy is meaningful, because it helps you think about what kinds of processes could be at work solving problems, and it plainly has implications in genetics, because there you’re dealing with molecules so large that even random sources don’t really make sense, suggesting yet another means of computation.
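As a rough illustration of these scales, the sketch below compares the time required to exhaustively enumerate all binary inputs of length n against the three timescales just mentioned; the enumeration rate of 10^9 inputs per second is purely an assumption.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double rate = 1e9;                       // assumed inputs tested per second
        const double seconds_per_year = 3.156e7;
        const double empire   = 1e3;                   // ~1,000 years
        const double life     = 3.8e9;                 // a few billion years
        const double universe = 1.38e10;               // time since the Big Bang, in years

        for (int n = 40; n <= 120; n += 20) {
            double years = std::pow(2.0, n) / rate / seconds_per_year;
            std::printf("n = %3d bits: %.2e years (empire %s, life %s, universe %s)\n",
                        n, years,
                        years > empire   ? "exceeded" : "ok",
                        years > life     ? "exceeded" : "ok",
                        years > universe ? "exceeded" : "ok");
        }
        return 0;
    }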

Storing Charges Without Compounds

It just dawned on me that you can simply store positively and negatively charged particles separately (e.g., electrons and protons), just like cells do, to generate a difference in electrostatic charge, and therefore motion / electricity. I’m fairly confident space near Earth is filled with charged particles, since my understanding is that the atmosphere and the Earth’s magnetic field are our primary defense against charged particles, and the source of the Aurora Borealis. So, by logical implication, you can collect and separate positively and negatively charged particles in space, bring them back down to Earth, and you have a battery. Moreover, because these are subatomic particles, and not compounds, you have no chemical degradation, since you’re dealing with basically perfectly stable particles. As a consequence, you should be able to reverse the process indefinitely, assuming the battery is utilized by causing the electrons / protons to cross a wire and commingle. Said otherwise, there’s no reason why we can’t separate them again, producing yet another battery, and repeat this indefinitely. I’m not an engineer, and so I don’t know the costs, but this is plainly clean energy, and given what a mess we’ve made of this place, I’m pretty sure any increased costs would be justified. Just an off-the-cuff idea: a negatively charged fluid could be poured into the commingled chamber, and then drained, which should cause the protons to follow the fluid out, separating the electrons from the protons again.
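As a rough way to quantify the storage (treating the two chambers of separated charge as a capacitor-like arrangement, which is my own simplification), the recoverable energy is

U = Q^2 / (2C) = (1/2) C V^2,

where Q is the separated charge, C is the capacitance of the arrangement, and V is the resulting potential difference.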

A Note on Net Versus Local Charge

I was puzzled by the motion of proteins within cells, which is apparently a still unsolved problem, and it dawned on me that, whether or not this explains the motion, you could at least theorize that a given molecule prefers one medium over another, even if both have the same net charge, because of the distribution of the charges within each medium. That is, as a molecule gets larger, the small local electrostatic charges could produce macroscopic differences in behavior. So when a given molecule is equally distant from porous mediums A and B, each with the same net charge, it could be that the molecule naturally permeates one medium more readily than the other, due to the distribution of charges in the mediums and the molecule, not the net charges of either. This would allow molecules and mediums with a net-zero charge to be governed by small-scale electrostatic forces. If this in fact works, it would allow, e.g., for DNA to produce protein mediums that are permeable only by molecules that have a particular distribution of charges, even if the net charge is zero. It would also allow for lock-and-key mechanisms at the molecular level (e.g., tubules), since the attraction could form a seal of sorts, which would not work unless the local charge maps line up. This in turn would allow for specialization among tubules, where you could have multiple tubule types, each with their own corresponding charge distribution. It also implies that life could exist without organic chemistry, provided you have the same behaviors from some other set of compounds.
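As a minimal numerical illustration of the point (the geometries and charge values below are invented purely for illustration), a “molecule” and two “media”, all with net charge zero, can nonetheless have different electrostatic interaction energies because of how their charges are distributed:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Charge { double x, y, z, q; };

    // Coulomb interaction energy between two charge sets (in units where k = 1).
    double interaction(const std::vector<Charge>& a, const std::vector<Charge>& b) {
        double u = 0.0;
        for (const auto& p : a)
            for (const auto& r : b) {
                double dx = p.x - r.x, dy = p.y - r.y, dz = p.z - r.z;
                u += p.q * r.q / std::sqrt(dx * dx + dy * dy + dz * dz);
            }
        return u;
    }

    int main() {
        // A small dipole-like "molecule": net charge zero.
        std::vector<Charge> molecule = {{0.0, 0.0, 0.0, +1.0}, {0.5, 0.0, 0.0, -1.0}};

        // Medium A: charges tightly paired (small local dipoles), net charge zero.
        std::vector<Charge> mediumA = {{5.0, 0.0, 0.0, +1.0}, {5.1, 0.0, 0.0, -1.0},
                                       {5.0, 1.0, 0.0, +1.0}, {5.1, 1.0, 0.0, -1.0}};

        // Medium B: the same net charge (zero), but the charges are spread farther apart.
        std::vector<Charge> mediumB = {{5.0, 0.0, 0.0, +1.0}, {7.0, 0.0, 0.0, -1.0},
                                       {5.0, 1.0, 0.0, +1.0}, {7.0, 1.0, 0.0, -1.0}};

        std::printf("interaction with medium A: %f\n", interaction(molecule, mediumA));
        std::printf("interaction with medium B: %f\n", interaction(molecule, mediumB));
        return 0;
    }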

A Simple Multiverse Theory

In a footnote to one of my papers on physics (see Footnote 7 of A Unified Model of the Gravitational, Electrostatic, and Magnetic Forces), I introduced but didn’t fully unpack a simple theory that defines a space in which time itself exists, in that all things that actually happen are in effect stored in some space. The basic idea is that as the Universe changes, it’s literally moving in that space. That said, you could dispense with time altogether as an independent variable in my model, since time is the result of physical change, and so if there were no change at all to any system, you would have no way of measuring time. You could therefore argue that time is simply a secondary property imposed upon reality, one that is measured through physical change.

However, we know that reality does in fact change, and we also have memories, which are quite literally representations of prior states of reality. This at least suggests the possibility that reality also has a memory, one that stores the prior, and possibly the future, states of the Universe. Ultimately, this may be unnecessary, and therefore false, but it turns out you can actually test the model I’m going to present experimentally, and some known observations are consistent with it, in particular the existence of dark energy, and the spontaneous, temporary appearance of virtual particles at extremely small scales.

The basic idea is that you have a source, which generates what can be thought of as a Big Bang, producing an initial state of the Universe, S_0. That initial state is then operated upon by the laws of physics, producing the next state, S_1. Obviously, time is discrete in my model. We can allow for non-determinism by simply viewing each S_i as a set of possible states, so that S_0, for example, contains one state, whereas S_1 could contain any number of states. Conservation of momentum seems to be inviolate, whereas conservation of energy is plainly false, given that fields, for example, produce unbounded acceleration, and therefore an unbounded amount of kinetic energy. As such, if we want to allow for non-determinism, and therefore multiplicity, we can assume that the net momentum of any S_i is zero, which will guarantee that momentum is conserved, even if we allow for the eventual unbounded generation of energy (recall, each S_i is assumed to physically exist, and propagate through a space). Therefore, in a Universe that allows for non-determinism, yet nonetheless conserves momentum, it must be the case that S_0 contains at least two instances of the Universe, each with offsetting momenta, or a single instance that has a net momentum of zero.
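One compact way to write this down (the operator name L and the momentum notation P(x) are mine, not from the footnote):

S_{i+1} = L(S_i), with \sum_{x \in S_i} P(x) = 0 for all i,

where L applies the laws of physics (possibly one-to-many, allowing non-determinism) and P(x) is the total momentum of the instance x.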

If we imagine the elements of each S_i as snapshots of the configuration of the Universe at a given moment in time, that are moving through some space, then it must be the case that something prevents them from colliding in any noticeable manner, with any noticeable frequency, since that plainly does not occur from our perspective. This can be accomplished with a force that is attractive to all energy within a given x \in S_i, yet repulsive to all energy in any other y \in S_i, and in any z \in S_j, for all j \neq i. That is, this force would be attractive to all energy within a given instance of the Universe, producing cohesion, despite its velocity through the space of time itself, yet repulsive to all other energy, ensuring that each snapshot of the Universe stays independent, without interacting with any other snapshot of the Universe. This force is obviously gravity, and moreover, the repulsive force completes the missing symmetry of gravity, producing a repulsive force between masses in some cases.

However, if we allow for small-scale violations of this general idea of each snapshot of the Universe being independent, we could produce virtual particles that temporarily enter and then leave our timeline. This could also be the source of dark energy, which would constitute an unlikely, but possible, macroscopic intrusion of energy from other timelines.

If the source at inception fires repeatedly, then you would have multiple instances of initial conditions that propagate in this space, but that’s perfectly fine, given the attractive and repulsive forces of gravity. If the source at inception generates the same initial conditions every time, then you’ll just have multiple instances of the same evolution. In this case, depending upon where we are positioned in the space of time itself, other snapshots of the Universe could literally contain our futures. If, however, it generates different initial conditions, then you will have multiple evolutions. Ultimately, if the space of time truly exists in this manner, then whether or not you have a multiverse, the past should be observable through some means. In particular, it should be possible to produce a virtual particle that is a real particle in our timeline and a virtual particle in another, and if it “comes back” with momentum that cannot be explained, then this would be evidence that it had in fact travelled to a different timeline, and interacted with an unknown system. Another test would be the existence of any wrong-way motion between masses that can’t be explained by other forces, suggesting the energy in question is not from our timeline, since in this view, mass that is not from our timeline is repelled.

Note that you don’t need a multiverse theory to explain either superposition or entanglement, at least in my model. Instead, superposition simply takes the fixed energy of a system and allocates it to some number of possibilities, each being truly extant, with a fraction of the total energy of the system. Similarly, entanglement would occur in this view because you’ve simply taken the energy of some system and split it macroscopically, creating two instances of the same system, each with less than the total energy, the sum of the two equal to the total energy; they are therefore entangled because they are one and the same system.

The Halting Problem And Provability

As a general matter, the question of whether or not a UTM will halt when given an input x is not computable, in that there is no single program that can say, ex ante, whether U(x) will halt or run forever. We can restate this by iterating the value of x as a binary number, beginning with 1 and continuing on, and asking, for each value, whether or not U(x) will halt. We know this is not decidable in general, but it must be the case, for each x \in \mathbb{N}, that U(x) will either halt or run forever. This implies an infinite set of mathematical facts that are unknowable as a general matter, and will instead require the passing of an arbitrary amount of time.

Now consider the question of whether you can prove that U(x) will halt for a given x, without running U(x). The Halting Problem does not preclude such a possibility; it instead precludes the existence of a generalized program that can say, ex ante, whether or not U(x) will halt as a general matter. For example, consider the function F(x) = x^2, for all x \in \mathbb{N}. We can implement F(x) in, for example, C++, and this will require exactly one operation for any given input, and because C++ is a computable language, there must be some input to a UTM that is equivalent to F(x), for all x \in \mathbb{N}. As a consequence, we have just proven that an infinite set of inputs to a UTM will halt, without running a single program.
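For concreteness, here is a minimal C++ version of F; the choice of integer type is mine, and fixed-width integers will of course overflow for large inputs, but the program plainly halts for every input.

    #include <cstdint>
    #include <cstdio>

    // F(x) = x^2: a single multiplication, after which the program halts.
    // (Fixed-width integers overflow for large x, but the program still halts.)
    std::uint64_t F(std::uint64_t x) {
        return x * x;
    }

    int main() {
        std::printf("%llu\n", static_cast<unsigned long long>(F(12)));
        return 0;
    }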

This leads to a set of questions:

1. If you can prove that a program P will halt over some infinite subset of \mathbb{N}, and not halt for any other natural number, is there another program R that will report 1 or 0, for halting or not halting, respectively?

2. If you can prove that a program P will halt over some infinite subset of \mathbb{N}, and not halt for any other natural number, is there another program R that will report 1 or 0, for halting or not halting, respectively, without running P or any equivalent program (as defined below)?

3. If you can prove that a program P (and all equivalent programs, as defined below) will halt over some infinite subset of \mathbb{N}, and not halt for any other natural number, is there another program R that can provide a proof (in either human, machine, or symbolic language) that this is the case, for all equivalent programs (as defined below)?

A program A is equivalent to program B if A(x) = B(x) for all x \in \mathbb{N}.

Note that the existence of a single case where there is a proof, but no corresponding program R, would be proof that the human being who generated the proof is non-computable, and therefore, that physics is non-computable.