Rethinking My Original Work in A.I.

I introduced an unsupervised algorithm a while back that finds the geometric edge of a cluster. It's astonishingly efficient and accurate when your data is positioned in some physically intuitive manner (e.g., macroscopic objects in 3D space), but it's not so accurate even on industry benchmark datasets. If, in contrast, you use my supervised algorithm, accuracy is basically perfect. And if you use my original approach, which is unsupervised but tracks the rate of change in structure over the entire dataset as you increase the level of discernment, it works really well in general.

This is surprising, because this third case is beyond the theorems I presented in the paper that defines the foundations of my work in A.I. Specifically, the piece that's missing is why this would be the correct value of delta. On a supervised basis, it's trivial: it's correct because that's what the training dataset tells you. In contrast, the unsupervised algorithm has no theoretical support, yet it works astonishingly well. I'm thinking about this because I'm just starting to sell my work, and I don't want to sell bullshit, but I don't think anyone thinks this carefully anymore, so I've likely taken it a bit too far.
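To make the third case concrete, here is a minimal sketch of the general idea, not my actual algorithm: sweep a discernment level delta over the dataset, measure some simple notion of structure at each level (here, the number of point pairs within delta of one another), and select the delta at which that structure changes fastest. The function name, the structure metric, and the selection rule are all assumptions made for illustration.

```python
# A minimal sketch, NOT the published algorithm: the structure metric
# (pair counts) and the argmax selection rule are assumptions.
import numpy as np
from scipy.spatial.distance import pdist

def select_delta(points: np.ndarray, num_steps: int = 100) -> float:
    """Return the delta at which pairwise structure changes most rapidly."""
    dists = pdist(points)  # all pairwise distances in the dataset
    deltas = np.linspace(dists.min(), dists.max(), num_steps)
    # structure[i] = number of point pairs within deltas[i] of each other
    structure = np.array([(dists <= d).sum() for d in deltas])
    # rate of change of structure between consecutive levels of discernment
    rate = np.diff(structure)
    # pick the delta just before the largest jump in structure
    return deltas[int(np.argmax(rate))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # two well-separated Gaussian clusters in 3D
    cluster_a = rng.normal(loc=0.0, scale=0.5, size=(50, 3))
    cluster_b = rng.normal(loc=10.0, scale=0.5, size=(50, 3))
    print("selected delta:", select_delta(np.vstack([cluster_a, cluster_b])))
```

On data like the two clusters above, the biggest jump in pair counts happens when delta grows large enough to bridge the gap between clusters, so the selected delta sits near the scale that separates within-cluster from between-cluster distances.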
