Using Sorting in Clustering

I use sorting all the time in my algorithms to simulate running nearest neighbor (see Section 2.4 of Vectorized Deep Learning), and what just dawned on me is that I actually proved formally that a list is sorted if and only if its adjacent entries have the minimum possible distance (see Theorem 2.1 of Sorting, Information, and Recursion). This implies that the resultant sorted list provides you with the nearest neighbors of each element in the list. This in turn allows for a trivial adaptation of my core algorithms: rather than take the norm of the difference between a given vector and all others, you simply take the norm of the difference between a given vector and its neighbors in the order in which they're sorted in the list. The advantage in that case is that if you're not running the algorithms truly in parallel (which is the case on consumer devices when you have too many rows), then you're only performing one operation per comparison. Attached is an example using my supervised clustering algorithm, which increases the radius of a sphere until it hits its first error, which in this case means simply increasing the index into the sorted list until you encounter a classifier that is unequal to the classifier in question (i.e., the classifier of the origin of the sphere). This produces really fast runtimes, running in about 10 seconds given 100,000 rows with 15 columns – this is pretty serious stuff, and will be included in the Massive Version of Black Tree AutoML, for just $999. A mutually exclusive version (i.e., non-intersecting clusters) would typically produce even faster runtimes, since the size of the effective dataset can shrink with each iteration.
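The attached example itself isn't reproduced here, so purely as an illustration of the first-error idea, below is a minimal Python sketch under my own assumptions (rows are sorted by their Euclidean norms, and the cluster around each row is grown in both directions of the sorted list until the first mismatched label); the function and variable names are mine, not those of the Black Tree code.

    import numpy as np

    def first_error_clusters(X, y):
        # Sort rows by their Euclidean norms, as a stand-in for the sorting step.
        order = np.argsort(np.linalg.norm(X, axis=1))
        y_sorted = np.asarray(y)[order]
        n = len(order)
        clusters = []
        for i in range(n):
            members = [order[i]]
            # Walk right, then left, in the sorted list, stopping at the first
            # row whose label differs from the origin row's (the first error).
            for step in (1, -1):
                j = i + step
                while 0 <= j < n and y_sorted[j] == y_sorted[i]:
                    members.append(order[j])
                    j += step
            clusters.append(members)
        # clusters[i] holds the original row indexes of the cluster grown
        # around the row at sorted position i.
        return clusters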

For a testing dataset, you could simply combine the training and testing datasets, keep track of which entries are testing rows, and then go out some radius from each testing row by checking the classifiers of the rows to its left and right. Applying the attached approach (i.e., first error), you would proceed until you encountered more than one class. You could instead proceed by no more than some fixed distance, or some fixed number of entries. You could report the modal class, or simply report the entire cluster of classes as a prediction. This will be extremely fast, since you're operating only on the testing rows and the adjacent training rows, rather than the entire training dataset (save for the sorting step). I've attached code that implements this method, which seems to work really well, though more testing is required. I've included a basic confidence metric that also seems to work, in that accuracy increases as a function of confidence. This code is applied to the MNIST Fashion Dataset, and makes use of image preprocessing algorithms you can find in my A.I. Library on ResearchGate, but you can also simply ignore the preprocessing, as everything past the heading, “Runs Prediction”, is generalized and requires only a dataset.
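The attached prediction code and its confidence metric aren't reproduced here either; the following is a rough Python sketch of the scheme just described, under my own assumptions (norm-based sorting again, an optional cap on the number of entries per side, and a placeholder confidence equal to the modal class's share of the cluster, which is not necessarily the metric used in the attached code).

    import numpy as np
    from collections import Counter

    def predict_sorted_neighbors(train_X, train_y, test_X, max_steps=None):
        # Combine training and testing rows, and flag which entries are test rows.
        X = np.vstack([train_X, test_X])
        is_test = np.zeros(len(X), dtype=bool)
        is_test[len(train_X):] = True
        order = np.argsort(np.linalg.norm(X, axis=1))
        position = {row: k for k, row in enumerate(order)}
        preds, confs = [], []
        for t in range(len(train_X), len(X)):
            labels = []
            # Walk right, then left, over adjacent rows, skipping other test rows.
            # With max_steps=None, stop at the first error (a second class);
            # otherwise collect up to max_steps training rows per side.
            for step in (1, -1):
                k, taken = position[t] + step, 0
                while 0 <= k < len(order) and (max_steps is None or taken < max_steps):
                    row = order[k]
                    if not is_test[row]:
                        if max_steps is None and labels and train_y[row] != labels[0]:
                            break
                        labels.append(train_y[row])
                        taken += 1
                    k += step
            if not labels:
                preds.append(None)
                confs.append(0.0)
                continue
            label, votes = Counter(labels).most_common(1)[0]
            preds.append(label)
            # Placeholder confidence: modal share (trivially 1.0 in the first-error variant).
            confs.append(votes / len(labels))
        return preds, confs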

Here is a plot of accuracy as a function of confidence over the MNIST Fashion Dataset:

Superficial Versus Functional Unity

So I just came to an amazing insight on the nature of life, and other systems generally:

There’s a big difference between superficial unity (e.g., something looks like a single object) and functional unity (e.g., an atom behaves as a single object). We know of examples of arguably random systems, like cellular automata, that exhibit superficial unity (they literally contain triangle-shaped outputs, or Sierpinski Triangles). But that’s not the same thing as an atom, which generally interacts as a single unit, or a composite particle, which, despite being a complex object, generally behaves as a single unitary whole when interacting with other systems. And the reason I think this is connected to life is that at least some people claim the origin of life stems from random interactions –

This is deeply unsatisfying, and in my opinion, an incomplete explanation of what’s happening.

Think about the probability of randomly producing something as large and complex as a DNA molecule, which has deep unitary functions, copies itself, consumes its environment, and ends up generating macroscopic systems that are also functionally unitary –

This is a ridiculous idea. For intuition, generate random characters on a page, and try to run them in C++, or whatever language you like –

What’s the probability you’ll even produce a program that runs, let alone one that does something not only useful, but astonishing, with an output that is orders of magnitude larger than the input program? There’s no amount of time that will make this idea make sense, and you’ll probably end up needing periods of time that exceed the age of the Universe itself. A simple for loop contains about 20 characters, and there are about 50 characters in scope in a typical programming language –

This is not realistic thinking, as a program of any real utility will quickly vanish into the truly impossible, with even a simple for loop having a probability that is around O(\frac{1}{10^{30}}). For context, there have been about O(10^{17}) seconds since the Big Bang. Let’s assume someone had a computer running at the time of the Big Bang that could generate 1 billion possible programs per second. Every program generated is either a success or a failure, and let’s assume the probability of success is again p = O(\frac{1}{10^{30}}). The binomial distribution in this case reduces to,

\bar{p} = np(1-p)^{n-1},

where n is the number of trials and p is the probability of generating code that runs. Because we’ve assumed that our machine, which has been running since inception, can test one billion possibilities per second, we would have a total number of trials given by n = 10^{26}. This yields a comically low probability of \bar{p} = O(\frac{1}{10^4}), even given the absurd assumption that calculations have been running since the Big Bang.
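As a quick numerical check on the figures above (using the same assumed orders of magnitude, nothing more), here is a short Python sketch that evaluates the binomial expression, using log1p and expm1 to avoid floating-point underflow at these scales:

    import math

    p = 1e-30               # assumed probability that a random program is useful
    n = 10**17 * 10**9      # ~10^17 seconds since the Big Bang at 10^9 programs per second
    log_q = math.log1p(-p)  # log(1 - p), computed stably for tiny p

    p_exactly_one = n * p * math.exp((n - 1) * log_q)  # binomial P(X = 1), as above
    p_at_least_one = -math.expm1(n * log_q)            # P(X >= 1), for comparison

    print(p_exactly_one, p_at_least_one)                # both on the order of 10^-4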

Expressed in these terms, the idea that life is the product of chance sounds absurd, and it is, but this doesn’t require direct creationism, though I think philosophers and scientists are still stuck with the fact that the Universe is plainly structured, which is strange. Instead, I think that we can turn to the atom, and to composite particles, for an intuition as to how a molecule as complex as DNA could come into being. Specifically, I think that high-energy, high-complexity interactions cause responses from Nature that impose simplicity, and literally new physics:

The physics inside an atom is not the same as the physics outside an atom;

The physics inside a composite particle is not the same as the physics outside a composite particle.

This does not preclude a unified theory; perhaps, e.g., different subsets or instances of the same general rules apply under different conditions, with the rules changing as a function of energy and complexity. So if, e.g., you have a high-energy event that is high in complexity at the scale of the atom, then perhaps this could give rise to a system like DNA. This is, however, distinct from random processes that produce superficial or apparent unity (i.e., it looks like a shape), and is instead a process of Nature that imposes functional unity on systems that are sufficiently high in energy and complexity.

I am in some sense calling into question at least some of the work of people like Stephen Wolfram, who, from what I remember, argue that the behavior of automata can be used to describe the behavior of living systems. I think instead you can correctly claim that automata produce patterns that are superficially similar to, e.g., the coloring and shapes you find in Nature, but that’s not the same as producing a dynamic and unitary system that superficially resembles the patterns you find in that area of computer science generally. The idea being that you have a discrete change from a complex, churning physical system into a single ordered system that behaves as a whole, and has its own physics that are, again, distinct from the physics prior to this discrete change. It’s the difference between producing a static artifact that has elements similar to known objects, and producing a dynamic artifact with those same superficial qualities. What’s interesting is that we know heavy elements are produced in stars, which are plainly very high-energy, very complex systems. Now think about how much larger and more complex DNA is compared to even a large atom. If this theory is correct, then we would be looking for systems that aren’t necessarily as large as stars, but perhaps have even greater energy densities and complexities –

That’s quite the conundrum, because I don’t think such things occur routinely, if at all, on Earth. I think the admittedly possibly spurious logic suggests that higher-energy stars, specifically black holes, might be capable of producing larger artifacts like large molecules. The common belief that nothing escapes a black hole is simply incorrect, and not because of Hawking’s work, but because black holes are believed to have a lifespan, just like stars. As a result, any objects they create could be released –

The intuition would be that powerful stars produce heavy elements, so even more powerful stars, like black holes, could produce molecules. And because the physics of black holes is plainly high-energy and complex, they’re decent candidates.

However, even if all of this is true, and we are able to someday replicate the conditions that give rise to life, we are still stuck with the alarming realization that there are apparently inviolate rules of the Universe, beyond physics, the causes of which are arguably inaccessible to us. Specifically, the theorems of combinatorics are absolute, and more primary than physics, since they are not subject to change or refinement –

They are absolute, inviolate rules of the Universe, and as a result, they don’t appear to have causes in time, like the Big Bang. They instead follow logically, almost outside time, from assumption. How did this come to be? And does that question even mean anything, for rules that seem to have no ultimate temporal cause? For these reasons, I don’t think there’s a question for science there, because it’s beyond cause. This is where I think the role of philosophy and religion truly comes into play, because there is, as far as I can tell, no access to causes beyond the mere facts of mathematics. That is, we know a given theorem is true, because there is a proof from a set of assumptions that are again totally inviolate –

Counting will never be incorrect, and neither will the laws of logic. And so we are instead left with an inquiry into why counting is correct, and why logic is correct, and I don’t think that’s subject to scientific inquiry. It simply is the case, beyond empiricism, though you could argue we trust the process because it’s always been observed to be the case. But this is dishonest, because everyone knows, in a manner that at least I cannot articulate in words, that you don’t need repeated observation to know that counting is inviolate. Moreover, I don’t think this is limited to combinatorics, but instead would include any theorems that follow from apparently inviolate assumptions about the Universe. For example, the results I presented in “Information, Knowledge, and Uncertainty” fit into this category, because they follow from a tautology that simply cannot be avoided. Further, the results I present in Section 1.4 of “A Computational Model of Time-Dilation” also fit this description, because all measurements of time made by human beings will be discrete, and so the results again follow from apparently inviolate assumptions about the Universe. And so we are stuck with the alarming realization that there are apparently inviolate rules of the Universe, the causes of which are arguably inaccessible to us. That is, the laws of combinatorics seem to be perennially true, following from idea itself, independent of any substance, and without succession from cause in time.

So in this view, even if the conditions of the origins of life are someday replicable, the real mystery is the context that causes life to spring into existence –

The causes of the laws of the Universe being inaccessible, perhaps in contrast to the conditions that produce life.

To take this literally, the laws of combinatorics actually exist, outside time and space, and yet impose conditions upon time and space that are absolute and inviolate. This space, assuming it exists, would have a relational structure that is also absolute and inviolate, as the logical relationships among the theorems of combinatorics are also absolute and inviolate. It is, in this view, only the peculiarity of our condition that requires the progression through time, which allows for computation, and in turn, the discovery of these laws and relationships, whereas in reality, they simply exist, literally beyond our Universe, and are alluded to by observation of our Universe. To return to the topic of life, without life, there would be no ideas, and instead only the operation of their consequences (i.e., the undisturbed laws of physics). Because life exists, there is, in this view, a connection between the space of ideas and our Universe. To lend credence to this view, consider the fact that there are no constraints on the Universe other than those imposed by some source: for example, the written laws of mankind, the limitations of the human body, the observed laws of physics, and ultimately, the theorems of combinatorics, which will never change. The argument above suggests, therefore, that the theorems of combinatorics have a source that is exogenous to both time and space, and to deny a source to the most primordial and absolute restrictions of our Universe seems awkward at best. Moreover, humanity’s progression through science has repeatedly struggled with that which is not directly observable by the human body, such as radio waves, magnetic fields, and gravity, yet we know something must be there. In this case, logic implies that the laws of combinatorics exist in a space that, by definition, must be beyond both time and space, though its consequences are obvious, and with us constantly, and so that space must have dominion over time and space, and is, from our condition, absolute and inviolate.

I suppose, if you consider, and if you believe, that the laws of mathematics themselves are the result of work, of design, then the work that follows is only finite, in light of what must have been a literally infinite amount of work to set the conditions that allow for life itself.

Rethinking My Original Work in A.I.

I introduced an unsupervised algorithm a while back that finds the geometric edge of a cluster, and it’s astonishingly efficient, and accurate, when you actually have data that is positioned in some kind of physically intuitive manner (e.g., macroscopic objects in 3D space). However, it’s not so accurate when you’re dealing with even industry benchmark datasets. If, in contrast, you use my supervised algorithm, accuracy is basically perfect. If you use my original approach, which is unsupervised but tracks the rate of change in structure over the entire dataset as you increase the level of discernment, it works really well in general. This is surprising, because this third case is beyond the theorems I presented in the paper that defines the foundations of my work in A.I. Specifically, the piece that’s missing is why this would be the correct value of delta. On a supervised basis, it’s trivial – it’s correct because that’s what the training dataset tells you. In contrast, the unsupervised algorithm has no theoretical support, but it works astonishingly well. I’m thinking about this because I’m just starting to sell my work, and I don’t want to sell bullshit, but I don’t think anyone thinks this carefully anymore, so I’ve likely taken it a bit too far.
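To make the third approach concrete for readers, here is a toy Python version of the underlying idea, not the actual Black Tree implementation: cluster by linking points within a distance delta of one another, sweep delta (the level of discernment), and track how the structure changes as delta increases; the selection criterion below (the delta at which the cluster count changes most sharply) is purely an illustration, not necessarily the criterion used in my code.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import connected_components

    def structure_curve(X, num_steps=100):
        # Pairwise Euclidean distances between all rows of X.
        D = squareform(pdist(X))
        deltas = np.linspace(0.0, D.max(), num_steps)
        counts = []
        for d in deltas:
            # Link any two points within delta of one another and count clusters.
            n_clusters, _ = connected_components(D <= d, directed=False)
            counts.append(n_clusters)
        counts = np.array(counts)
        # Track the rate of change in structure as delta increases, and pick
        # the delta at which it changes most sharply (illustrative only).
        changes = np.abs(np.diff(counts))
        best = int(np.argmax(changes)) + 1
        return deltas[best], deltas, counts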