Vectorized Interpolation

I’ve written an algorithm that vectorizes polynomial interpolation, producing extremely fast approximations of functions. The algorithm works by generating all possible polynomials of a given degree (over a quantized set of coefficient values), and then evaluating all of them. Because these operations are vectorized, the runtime is excellent, even on a home computer.

As a practical matter, the runtime is of course sensitive to the algorithm’s parameters, which are (a) the number of possible values for each coefficient, and (b) the degree of the polynomial.

The polynomial is expressed as follows:

T(x) = \theta_0 + \theta_1 x^1 + \ldots + \theta_k x^k.

The algorithm works by quantizing the possible values of each \theta_i, which have an initial range of [-1,1]. By subdividing this interval into equally sized increments, we create a set of approximately N = \frac{2}{\text{increment}} possible values for each \theta_i. We can then generate all possible resultant polynomials of a given degree k, by generating all possible ordered selections, with repetition, of the coefficients from this set of N values. That is, each of the k+1 coefficients \theta_0, \ldots, \theta_k can take on any one of N possible values, resulting in N^{k+1} polynomials.
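
For concreteness, here is a minimal Octave sketch of this enumeration step, separate from the Octave code at the end of the post; the degree, the increment, and all variable names are illustrative choices of mine, not taken from that code.

k = 4;                                % degree of the polynomial (illustrative)
increment = 0.25;                     % quantization step; smaller steps grow N^(k+1) quickly
vals = -1:increment:1;                % possible values for each theta_i
N = numel(vals);

grids = cell(1, k + 1);
[grids{:}] = ndgrid(vals);            % one grid per coefficient
Theta = zeros(N ^ (k + 1), k + 1);
for i = 1:k + 1
  Theta(:, i) = grids{i}(:);          % each row of Theta is one candidate (theta_0, ..., theta_k)
end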

The algorithm then evaluates every resultant polynomial at every sample point in the domain, and selects the polynomial whose values produce the least total error when compared to the values of the function being approximated.
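
Continuing the sketch above, the evaluation step reduces to a single matrix product against a Vandermonde-style matrix, here assuming the target function is sin(x) over an illustrative sample grid.

x = linspace(0, 2 * pi, 200);         % sample points in the domain (illustrative)
V = x(:) .^ (0:k);                    % 200 x (k+1) matrix; column j+1 holds x .^ j
Y = Theta * V';                       % row i holds candidate polynomial i evaluated at every x
err = sum(abs(Y - sin(x)), 2);        % total absolute error of each candidate against sin(x)
[~, best] = min(err);
best_theta = Theta(best, :);          % coefficients of the winning polynomial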

Ordinarily, this would be an incredibly inefficient approach, but because all of these steps can be vectorized, the runtime is excellent. On a sufficiently parallel machine, each step could be fully vectorized, likely allowing for essentially instantaneous testing of every candidate polynomial.

Running on an iMac, the algorithm generates and tests 160,000 fourth-degree polynomials in 2.38 seconds, and the best curve is shown below in green, together with the original sine curve in blue.

The algorithm generates and tests 160,000 linear approximations in 1.06 seconds, with the results shown below.

Note that increasing the degree of the polynomial increases the number of coefficients, and therefore the number of possible polynomials. Note also that the search space for a higher degree polynomial effectively includes all possible lower degree polynomials, since any coefficient could be approximately or exactly zero; for example, a fourth-degree candidate could produce a roughly straight line. The degree of the polynomial will therefore impact the runtime on an ordinary machine. Nonetheless, even ordinary machines are clearly capable of vectorized computing, making this approach useful on essentially every modern device.

However, this is just a first step, and the real point of this algorithm is that it can be used as an operator in a more elaborate optimization algorithm. For example, rather than simply terminating after this one step, you could instead analyze the results and repeat the process on some subset of the polynomials generated and tested. One obvious additional step would be to include shift terms that adjust the position of the curve along the x-axis:

T(x) = \theta_0 + \theta_1 (x - \alpha_1)^1 + \ldots + \theta_k (x - \alpha_k)^k.

We could apply the exact same process to the shift terms \alpha_i, and select the set that minimizes the resultant error.
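
One way that second pass could look, as a sketch rather than the post’s code, is to hold the best coefficients from the first pass fixed and quantize the shifts in the same manner; the loop over shift combinations is written out for clarity and could itself be vectorized like the coefficient step.

alpha_vals = -1:0.5:1;                % coarse grid of candidate shifts (illustrative)
agrids = cell(1, k);
[agrids{:}] = ndgrid(alpha_vals);
Alpha = zeros(numel(agrids{1}), k);
for i = 1:k
  Alpha(:, i) = agrids{i}(:);         % each row is one candidate (alpha_1, ..., alpha_k)
end

err2 = zeros(rows(Alpha), 1);
for a = 1:rows(Alpha)
  T = best_theta(1) * ones(size(x));  % theta_0 term has no shift
  for i = 1:k
    T += best_theta(i + 1) * (x - Alpha(a, i)) .^ i;
  end
  err2(a) = sum(abs(T - sin(x)));
end
[~, best_a] = min(err2);
best_alpha = Alpha(best_a, :);        % shifts that minimize the error for this theta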

As a general matter, you could take the view that this approach is similar to a Monte Carlo approach, but I would disagree: this process covers a complete range of values at a given level of granularity, and is therefore not random at all, but deterministic. It is, in my opinion, the same approach I take to A.I. generally: select a scale of granularity in observation or discernment, and then use vectorization to the maximum extent possible to solve the problem at that scale. Said otherwise, first compress the problem to a tractable space, and then use vectorization to solve the problem in that space.

All that said, this approach can be easily adapted to a Monte Carlo method, by simply generating random values for each \theta_i, rather than generating all possible values over some quantized space. The advantage of a Monte Carlo method would likely be even greater speed, since you don’t have to make use of the operators that generate all combinations and permutations, and can instead simply use vectorized random number generation.
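
A minimal sketch of that Monte Carlo variant, reusing the sample grid and evaluation matrix from the sketches above; the number of random candidates is an illustrative choice.

M = 160000;                           % number of random candidates (illustrative)
Theta_mc = 2 * rand(M, k + 1) - 1;    % coefficients drawn uniformly from [-1, 1]
err_mc = sum(abs(Theta_mc * V' - sin(x)), 2);   % evaluate and score in one shot
[~, best_mc] = min(err_mc);
best_theta_mc = Theta_mc(best_mc, :);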

Obviously, you could also use this for classification, by applying T(x) to a classification function and, if necessary, adding additional weights to the classifier values, since they are generally arbitrary.
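
As a toy illustration of that idea, and not something taken from the post’s code, each candidate polynomial could be thresholded and scored by how many sample points it misclassifies.

labels = sin(x) >= 0;                 % toy binary labels on the same sample points
pred = (Theta * V') >= 0;             % thresholded output of every candidate polynomial
misclass = sum(pred != labels, 2);    % misclassified points per candidate
[~, best_clf] = min(misclass);        % candidate with the fewest misclassifications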

Octave Code:
