This was a finance blog, and I was a derivatives lawyer. Most of the articles on finance and economics available here are also available on The Atlantic’s website. Now this is a science blog, and I’m quite busy revolutionizing physics and AI.
About My Work
I’ve reduced machine learning and deep learning to a set of three algorithms that are so fast, they can run on any consumer device. I’ve also rewritten all of special relativity using objective time, and developed a novel and unified theory of gravity, charge, and magnetism.
All of my work in physics and artificial intelligence follows almost entirely from the works of Alan Turing and Claude Shannon.
My model of physics treats reality itself as a computational engine, and in particular treats elementary particles as combinatorial objects. I show that, remarkably, Einstein’s equations for time dilation follow, despite the fact that my model has absolutely no superficial connection or similarity to relativity. In short, by making use of contemporary theories of information and computation, I’ve developed an entirely new model of physics that is closer to Newton’s idea of a mechanical universe.
My model of artificial intelligence imitates tasks accomplished by machine learning and deep learning algorithms, but is radically more efficient than any other approach I’m aware of: all of my algorithms have low-degree polynomial runtimes, allowing them to accomplish extremely high-dimensional, sophisticated tasks, such as 3D object classification, projectile path prediction, and image classification, quickly and accurately on ordinary, cheap consumer devices.
The fundamental observation that underlies my model of AI is that the complexity of an object depends upon the level of granularity that we use to observe the object. If we take a very detailed view of an object, its complexity will be high, whereas if we take a less detailed, “impressionistic” view of an object, its complexity will be low.
This simple, common-sense observation is remarkably useful. Specifically, my algorithms search for a local optimum level of complexity in between these two extremes, which I’ve found to be the point at which the actual structure of an object comes into focus. This allows my algorithms, for example, to categorize a dataset, or partition an image, with no prior information at all, simply by iterating through different levels of granularity until they find the optimum level of complexity that reveals the actual structure of the data or the image.
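One plausible reading of this procedure can be sketched in a few lines of Python. To be clear, everything below is my own illustrative assumption, not a description of the actual algorithms: I use Shannon entropy as the complexity measure, a sweep over bin widths as the levels of granularity, and an entropy plateau as the signal that the data’s structure has come into focus.

```python
import numpy as np

def entropy(labels):
    # Shannon entropy (in bits) of a discrete labeling.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def sweep_granularity(data, num_levels=50):
    # Discretize the data at progressively finer bin widths, recording
    # the entropy of the resulting labeling at each level of granularity.
    spread = data.max() - data.min()
    widths = np.linspace(spread, spread / num_levels, num_levels)
    entropies = np.array([entropy(np.floor((data - data.min()) / w))
                          for w in widths])
    return widths, entropies

def best_width(widths, entropies):
    # At the coarsest view entropy is near zero; at the finest it climbs
    # toward log2(n). Pick the width where entropy changes least between
    # adjacent levels -- a plateau, read here (my assumption) as the
    # granularity at which the data's actual structure is in focus.
    deltas = np.abs(np.diff(entropies))
    return widths[int(np.argmin(deltas)) + 1]

# Two tight, well-separated clusters around 0 and 10: the chosen width
# should merge points within a cluster without merging the clusters.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 0.1, 100), rng.normal(10, 0.1, 100)])
widths, entropies = sweep_granularity(data)
w = best_width(widths, entropies)
labels = np.floor((data - data.min()) / w)
```

On this toy dataset the sweep settles on a bin width that assigns one label per cluster, with no prior information about how many clusters exist, which is the behavior the passage above describes.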
This simple initial procedure allows a core set of three algorithms (image partition, categorization, and prediction) to accomplish nearly everything that can be done in AI, with simple “plug-ins” that address the particular tasks at hand.
C.V.: Resume CDavi-2
Email: derivativedribble [at] yahoo [dot] com
And as imagination bodies forth
The forms of things unknown, the poet’s pen
Turns them to shapes and gives to airy nothing
A local habitation and a name.