Quantize a space of amplitudes to obtain a code: each of finitely many peak amplitudes corresponds to some symbol or number. So if, e.g., a signal has a peak amplitude of 4, then the classifier / symbol it encodes is 4. Now posit a transmitter and a receiver for the signals. Send a signal from the transmitter to the receiver, and record both the known peak amplitude that was transmitted (i.e., the classifier) and the amplitudes observed at the receiver. Because of noise, the received amplitudes can differ from the transmitted amplitudes, including the peak amplitude, which we're treating as a classifier. For every received signal, use A.I. to predict the true transmitted peak amplitude / classifier: take a fixed window of observations around each peak, and provide that window to the ML algorithm. The idea is that a window around the peak carries more information about the signal than the peak alone, so even with noise, as long as the true underlying amplitude is known in the training dataset, all noisy copies of a given signal should be similarly incorrect, allowing an ML algorithm to predict the true underlying signal.

Below is an original clean signal (left), with a peak amplitude / classifier of 5, and the resultant signal with some noise (right). Note that the amplitudes are significantly different, but nonetheless my classification algorithms can predict the true underlying peak amplitude with great accuracy, because the resultant noisy curves are all similarly incorrect. Note also that the larger the set of signals, the more compression you can achieve: with an alphabet of N distinct signals, encoding a number x requires roughly log_N(x) symbols, so the symbol count shrinks as the base N of the logarithm grows. The datasets attached use 10 signals, with peak amplitudes of 1 through 10.
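For concreteness, here is a minimal sketch in Python of the setup, assuming NumPy and Gaussian-shaped pulses; the pulse shape, window size, and all names below are illustrative assumptions, not the attached code.

```python
import numpy as np

NUM_SIGNALS = 10   # peak amplitudes 1 through 10
HALF_WINDOW = 25   # samples kept on each side of the peak

def clean_pulse(peak_amplitude, length=201):
    # Gaussian-shaped pulse whose maximum equals peak_amplitude
    # (the shape is an assumption; any unimodal pulse would do).
    t = np.linspace(-3.0, 3.0, length)
    return peak_amplitude * np.exp(-t ** 2)

def window_around_peak(signal, half_width=HALF_WINDOW):
    # The fixed window of observations around the peak is what
    # actually gets handed to the ML algorithm, not the peak alone.
    p = int(np.argmax(signal))
    lo, hi = max(p - half_width, 0), min(p + half_width + 1, len(signal))
    return signal[lo:hi]

# With an alphabet of N = 10 signals, each pulse carries log2(10) ~ 3.32
# bits, and encoding a number x takes about log10(x) pulses.
```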


Attached is code that generates datasets; simply run Black Tree on the resultant datasets. The accuracies are very high: classification is perfect for one-way additive noise up to about 225% noise. For two-way noise (additive and subtractive), the accuracies are perfect at 25% noise, and about 93% at 100% noise. The noise level is expressed as a real number relative to the spacing between peak amplitudes, which are a distance of 1 from each other. So a noise level of 100% means that the received amplitude can differ from the true underlying amplitude by at most 1. You could also do something analogous with frequencies (e.g., using fixed-duration signals), though amplitude seems easier, since you can simply identify peaks using local-maximum testing (i.e., the amplitude goes up, then down), or use a fixed duration.
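Under the same assumptions, the sketch below shows how the one-way and two-way noise levels can be applied, and how a peak can be found by local-maximum testing; the nearest-neighbor classifier at the end is only a simple stand-in for Black Tree, not Black Tree itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(signal, level, two_way=False):
    # level is relative to the spacing (1.0) between adjacent peak
    # amplitudes: level = 1.0 (i.e., 100%) perturbs each sample by at most 1.
    if two_way:
        noise = rng.uniform(-level, level, size=signal.shape)  # additive and subtractive
    else:
        noise = rng.uniform(0.0, level, size=signal.shape)     # one-way, purely additive
    return signal + noise

def find_peak(signal):
    # Local-maximum testing: amplitude goes up, then down; keep the
    # largest such point (for interior samples this matches argmax).
    maxima = [i for i in range(1, len(signal) - 1)
              if signal[i - 1] < signal[i] >= signal[i + 1]]
    return max(maxima, key=lambda i: signal[i]) if maxima else int(np.argmax(signal))

def predict(window, train_windows, train_labels):
    # Nearest-neighbor stand-in for Black Tree: because all noisy copies
    # of a given signal are "similarly incorrect", the closest training
    # window usually shares the true peak amplitude.
    dists = np.linalg.norm(train_windows - window, axis=1)
    return train_labels[int(np.argmin(dists))]
```

A training row would then be window_around_peak(add_noise(clean_pulse(a), level)) labeled with a, generated repeatedly for each amplitude a from 1 to 10.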
This process could allow for communication over arbitrary distances using inexpensive and noisy means, because prediction can recover the transmitted symbols exactly (at the noise levels reported above), achieving literally lossless transmission. Simply include prediction in repeaters spaced over the distance in question. And because Black Tree is so simple, it can almost certainly be implemented in hardware. So the net idea is that you spend nominally more on repeaters that contain predictive hardware, and significantly less on cables, because even significant noise is manageable.
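As a sketch of the repeater idea (an assumed design, reusing the hypothetical helpers above): each repeater predicts the true peak amplitude of the incoming noisy pulse and retransmits a freshly generated clean pulse, so noise is wiped out at every hop instead of accumulating over the full distance.

```python
def repeater(noisy_signal, classify, regenerate):
    # classify: e.g., predict() applied to a window around the peak;
    # regenerate: e.g., clean_pulse() at the predicted amplitude.
    label = classify(noisy_signal)
    return regenerate(label)

def transmit(clean_signal, hops, channel, classify, regenerate):
    # channel models one noisy segment of cable, e.g.,
    # lambda s: add_noise(s, 0.25, two_way=True).
    signal = clean_signal
    for _ in range(hops):
        signal = repeater(channel(signal), classify, regenerate)
    return signal
```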