Black Tree AutoML Pricing

I’ve come to a conclusion on pricing, and there will be two versions:

1. A Free Version that cannot be used for commercial purposes, which includes data normalization and Nearest Neighbor prediction, as well as the “Magic Button”. This version processes datasets in Swift, and is a standalone application that basically anyone can use.

2. A Professional Version for $199 per user per year, which includes basically my entire library, plus a Session Log that allows you to see the history of what you’ve done, save and load a prior session, and inspect and operate on the dataset through commands entered into the GUI (see “CMND:”, below the Session Log screen, in the picture below). The actual A.I. code for the Pro Version will run in Octave, which means the runtimes are the same as my A.I. library, allowing for basically instant solutions to all core problems in deep learning, including prediction, clustering, image classification, video classification, object tracking, anomaly detection, time-series analysis, etc., with the ease of a GUI written in Swift. The basic animating principle is to allow bright people who don’t know how to code to execute sophisticated A.I. algorithms, in an interface comparable to Excel in complexity, and to allow actual data scientists to dispense with basically all routine tasks in deep learning. Though not finalized, this is roughly what both versions will look like when it’s done:

Confidence-Based Prediction Search

I’ve written the Octave code for what I’m calling a “Magic Button”, which allows you to search through predictions and find a subset of the dataset that generates the desired accuracy. This is possible because of a confidence-based prediction algorithm, which assigns each prediction a confidence score, using a method I developed that is rooted in information theory. As you increase the confidence threshold, the number of predictions that satisfy the threshold generally decreases, and the accuracy of the surviving predictions generally increases. You can read about the method in my paper, “Information, Knowledge, and Uncertainty”.
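To make the thresholding concrete, here is a minimal Octave sketch of the confidence sweep described above. The predicted labels, true labels, and per-row confidence scores are assumed inputs (the confidence scoring itself is the subject of the paper), so this illustrates the mechanics only, not the actual Black Tree code:

% Minimal sketch: sweep a confidence threshold and record, for each threshold,
% the accuracy over the surviving predictions and the fraction of rows that survive.
% predicted, actual, and confidence are assumed to be column vectors of equal length
% produced by some confidence-based classifier.
function [thresholds, accuracy, pct_rows] = confidence_curve(predicted, actual, confidence)
  thresholds = linspace(min(confidence), max(confidence), 100);
  accuracy = zeros(size(thresholds));
  pct_rows = zeros(size(thresholds));
  for i = 1:numel(thresholds)
    idx = confidence >= thresholds(i);           % predictions that meet the threshold
    pct_rows(i) = sum(idx) / numel(confidence);  % fraction of rows surviving
    if any(idx)
      accuracy(i) = mean(predicted(idx) == actual(idx));  % accuracy over the survivors
    else
      accuracy(i) = NaN;                         % no rows survive at this threshold
    end
  end
endfunction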

The “Magic Button” algorithm simply takes an accuracy threshold as its argument, and searches through the resultant curve for all points that are within 2.5% of that argument. This allows you to say, e.g., I want 90% accuracy, and the “Magic Button” will then find all points on the accuracy curve that satisfy this criterion (within 2.5%), the tie breaker being the number of rows associated with each point on the accuracy curve (remember, the higher the accuracy threshold, the lower the number of rows that meet the threshold). In the Mac OS version, this will literally be a button (see above), with a field beside it that allows you to specify the accuracy you want, and all rows that satisfy that criterion are then automatically saved to a file as a CSV dataset, which in the picture above is the “Magic Button Dataset”. This lets you isolate the highest performing subset of your dataset, which is part of the overall economic philosophy behind the software: to impose efficiency on the market for A.I. You can then punt the problem rows to a custom model, allowing an admin, e.g., to take responsibility not only for all routine matters of machine learning, but for all rows of any dataset that can be automatically solved for as a group, leaving only the balance to a data scientist. If of course you request 100% accuracy, and it’s just not possible, then you’re going to get an error, and that’s life, but as a general matter, this method allows you to achieve well over 90% accuracy, given any reasonably well-made dataset.
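As a rough illustration of the search itself, here is a minimal Octave sketch that operates on the two curves produced by the sweep above; the 2.5% tolerance and the row-count tie breaker follow the description, but the function name and interface are assumptions, not the shipping code:

% Minimal sketch of the "Magic Button" search over the accuracy curve.
% Points within 2.5% of the target accuracy (expressed as a fraction, e.g. 0.90)
% are candidates; ties are broken in favor of the largest number of surviving rows.
function [best_threshold, best_accuracy] = magic_button(thresholds, accuracy, pct_rows, target)
  tolerance = 0.025;
  candidates = find(abs(accuracy - target) <= tolerance);
  if isempty(candidates)
    error("No point on the accuracy curve is within 2.5%% of the target.");
  end
  [~, k] = max(pct_rows(candidates));  % tie breaker: keep as many rows as possible
  best = candidates(k);
  best_threshold = thresholds(best);
  best_accuracy = accuracy(best);
endfunction

In the GUI, the rows whose confidence meets best_threshold would then be the ones written out as the “Magic Button Dataset” CSV file.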

Attached is code that applies this method to the UCI Credit Dataset, all 30,000 rows, and above is a plot of accuracy as a function of confidence (the increasing curve), together with another curve that shows the percentage of the rows that satisfy the confidence threshold (the decreasing curve). The target accuracy for the Magic Button is set to 92.5%, and the method returns an actual accuracy of 93.103%, with only 29 rows surviving at that accuracy. Note that this is distinct from the software that initially tested the method, in that this software runs exactly once (because it has to, as a commercial product), producing a jagged curve, whereas the previous software generated a curve on average, over hundreds of randomly generated training / testing subsets of the dataset, producing smoother curves (since it was testing a thesis). I will likely offer both in the final product, because random testing allows you to produce a smooth curve of accuracy as a function of confidence, which you can then use when you don’t know the classifiers of the testing dataset, which is probably what’s going to happen in the real world. Specifically, you can say, e.g., this testing row has a confidence score of X, and on average, rows with a confidence score of X have an accuracy of Y.

As a reminder, you can download Octave for free from GNU.

Black Tree AutoML – Thoughts on Pricing

I’m of course working on my AutoML software, and in trying to figure out what to offer at what price, I’ve rediscovered Nearest Neighbor, which I wrote about in my paper, “Analyzing Dataset Consistency”, proving a few results about its accuracy. Specifically, I showed that if your dataset is what I call “locally consistent”, in that classifications don’t change over some fixed distance, then the accuracy of the Nearest Neighbor algorithm will be perfect. As a practical matter, it means that for many real world datasets, the accuracy is very high. As a consequence, I think it makes sense to offer only Nearest Neighbor and data normalization in the free and $5 per month versions, with no clustering. Then, for significantly more money, I think about $100 per month, you get clustering, plus my confidence software, which allows you to “magically” increase accuracy, at the expense of the number of rows that satisfy the stated confidence criteria. I think this is both economically fair and practical, because what it does is give people commercially viable software based upon known technology, at a low price per month, in a convenient GUI format. Then, for real money, you get something that is geared towards an audience that is trying to capitalize directly from predictions (e.g., making credit decisions), as opposed to maybe making routine use of basic machine learning, as an expense, not a driver of revenue. Stepping back, I think the big picture is, my software imposes efficiency on the market for A.I., because routine machine learning can be totally commoditized, allowing an admin to simply process CSV files all day, and when problem datasets arise (i.e., they produce low accuracy), even if you’re too cheap to buy the better version of my software, you can kick that dataset up to a real data scientist, who will then spend more time on the hard problems, and basically no time on the easy ones.
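For what it’s worth, here is a minimal Octave sketch of the normalization-plus-Nearest-Neighbor combination described above, assuming the class label sits in the final column of a numeric matrix; this is an illustration of the standard algorithm, not my library code:

% Minimal sketch of data normalization followed by Nearest Neighbor classification.
% train_data and test_data are assumed to be numeric matrices with the class label
% in the final column.
function predicted = nearest_neighbor_predict(train_data, test_data)
  train_x = train_data(:, 1:end-1);
  train_y = train_data(:, end);
  test_x  = test_data(:, 1:end-1);

  % Normalize each feature to [0, 1] using the training ranges.
  mins = min(train_x);
  ranges = max(train_x) - mins;
  ranges(ranges == 0) = 1;              % avoid division by zero on constant columns
  train_x = (train_x - mins) ./ ranges;
  test_x  = (test_x - mins) ./ ranges;

  predicted = zeros(rows(test_x), 1);
  for i = 1:rows(test_x)
    diffs = train_x - test_x(i, :);     % difference to every training row
    [~, j] = min(sum(diffs .^ 2, 2));   % index of the nearest training row
    predicted(i) = train_y(j);          % copy its classifier
  end
endfunction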

I’m not saying I’ll change my mind, but if you think that this is a bad idea commercially, shoot me an email at charles dot cd dot davi at [gmail dot com].

The attached code demonstrates this, and I’ve run it on the UCI Wine, Iris, and Parkinson’s datasets, all of which produce 90%+ accuracy.

Confidence-Based Prediction (Single Iteration)

I’m in the process of implementing my software for MacOS for commercial sale in the Apple App Store, and in particular, I’m marketing my confidence-based prediction as a “Magic Button”, because it actually does increase accuracy radically, as a function of confidence. The issue, however, is that I’ve tested it on average, over a large number of iterations, to be sure that I haven’t relied upon anomalies –

I.e., I tested it on several hundred random training / testing splits generated from the full dataset.

This produced increasing accuracy, on average, as a function of confidence, regardless of the dataset, which suggests my ideas are correct as an empirical matter. However, when applying this software in the real world, you have to be right the first time, not on average, because it doesn’t matter if your ideas are right in general; it matters that they can be applied every time. And so this led to a little tinkering with the equation for delta, and the result is awesome, consistently producing 90%+ accuracy the first time, not on average, regardless of the dataset. Below is a plot of accuracy as a function of confidence for the UCI Parkinson’s Dataset, which you’ll note is not as smooth as the curves produced in my previous note, because there’s no averaging: this is one shot, and it’s right.
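For context, here is a minimal Octave sketch of the two testing regimes, using a generic classifier handle (for example, the nearest_neighbor_predict sketch above); the handle, the file name, and the 80/20 split in the usage example are assumptions for illustration, not the actual test harness:

% Minimal sketch: evaluate a classifier over many random training / testing splits
% and report the mean accuracy (the averaged, thesis-testing regime), alongside the
% accuracy of a single split (the one-shot regime a commercial product relies on).
function [mean_acc, single_acc] = split_test(data, classify_fn, num_iterations, train_fraction)
  N = rows(data);
  num_train = round(train_fraction * N);
  accs = zeros(num_iterations, 1);
  for k = 1:num_iterations
    perm = randperm(N);
    train = data(perm(1:num_train), :);
    test  = data(perm(num_train+1:end), :);
    predicted = classify_fn(train, test);
    accs(k) = mean(predicted == test(:, end));
  end
  mean_acc = mean(accs);   % smooth, averaged figure
  single_acc = accs(1);    % any single iteration is the "one shot" figure
endfunction

% Example usage (assumed file and column layout):
% data = csvread("parkinsons.csv");
% [avg, once] = split_test(data, @nearest_neighbor_predict, 300, 0.8);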

The attached code is cued up for the UCI Credit Dataset.

Black Tree AutoML (Free Version)

I’ve updated the Free Version of Black Tree AutoML for MacOS, the major improvement being that if your dataset isn’t in CSV format, you’ll see an error message, rather than having the program simply crash. It’s not a functional improvement, but it’s helpful, since it’s annoying and unprofessional to simply let a program crash. It does, however, add somewhat to the runtime, since every single entry in the dataset is scanned to ensure it can be understood by the prediction algorithm.
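The check itself is simple in principle; here is a minimal sketch of that kind of entry-by-entry scan, written in Octave for consistency with the other examples (the shipping Free Version performs this check in Swift, so this is an illustration of the idea, not the actual code):

% Minimal sketch: scan every entry of a CSV file and flag anything that does not
% parse as a number, i.e., anything the prediction algorithm could not understand.
function ok = validate_csv(path)
  ok = true;
  fid = fopen(path, "r");
  if fid < 0
    ok = false;                         % file could not be opened at all
    return;
  end
  line_num = 0;
  while true
    line = fgetl(fid);
    if ~ischar(line), break; end        % end of file
    line_num = line_num + 1;
    entries = strsplit(line, ",");
    for j = 1:numel(entries)
      if isnan(str2double(entries{j}))  % non-numeric entry
        printf("Non-numeric entry at row %d, column %d\n", line_num, j);
        ok = false;
      end
    end
  end
  fclose(fid);
endfunction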

As a reminder, you can contribute to my Kickstarter Campaign, and in any case, all of the software should be up and running on the Apple App Store by the middle of February!

Enjoy!

UPDATE: There is apparently something wrong with DropBox at the moment, but I will keep trying throughout the day, and tomorrow, to upload the file.

This link works for now.

If you have questions, you can email me at [charles dot cd dot davi at gmail dot com].