Motion Classification Using a Time-Series Average

Continuing with my recent articles on gesture classification and other user interface classification techniques, I realized late yesterday that you can classify some motions using a simple time-average that produces motion blur, provided the motions are simple enough for this to work. In this case, it’s the same gesture dataset I’ve been working with in the past, in which I raise either my left hand or my right hand. Taking a time-average of the images in a given sequence (i.e., movie file) produces motion blur on one side of my body. This in turn produces a single-image classification task, to which I apply my standard image classification algorithm. That is, each sequence of images produces a single image that contains motion blur. I don’t think this is needed for the touch-screen-based user interface I’m working on, but I thought it was interesting, since it takes what is arguably a hard problem and turns it into a basic single-image classification task. This obviously won’t work for all tasks, but it will work for obvious gestures and other simple motions. Specifically, it won’t work if the order of the motions in a gesture is important, since averaging destroys information about the order in which the underlying motions occurred.

The time-series average of one gesture movie file, shown as a single image.
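
As a rough sketch (in Python / NumPy terms, just to illustrate the reduction), averaging each pixel position over the frames of a sequence collapses it into one motion-blurred image:

```python
import numpy as np

# Purely illustrative stand-in for one gesture movie file:
# num_frames grayscale frames, each height x width.
num_frames, height, width = 30, 120, 160
frames = np.random.rand(num_frames, height, width)

# One mean per pixel position, taken over all frames of the sequence.
# The whole sequence collapses into a single motion-blurred image.
blurred = frames.mean(axis=0)  # shape (height, width)
```

The sequence classification problem then becomes an ordinary single-image problem on `blurred`.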

In this case, the accuracy is perfect, though this is plainly a very simple dataset. I need to do more careful analysis to get a better sense of the actual runtime, but initial testing suggests it can process at least one frame per second, and more careful measurement of the runtimes of the algorithms themselves (as opposed to, e.g., loading and converting the images to grayscale) would likely yield a lower runtime. Further work on this is not rational, however, as I don’t seem to need it for anything. In terms of process, what this algorithm does is calculate the average for every pixel position over a sequence of images, and it does so in a vectorized manner. On a truly parallel machine, the runtime of this step would depend only upon the number of images (i.e., the number of summands), not the number of pixels (i.e., the number of independent averages being calculated). Because the number of pixels is large, it is of course possible that this is not fully vectorized on, e.g., my iMac, but it’s still really fast, and it works.

Here’s the command line code:
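
In rough Python terms (a sketch only: the file paths are hypothetical, imageio and its ffmpeg plugin are assumed for reading the movie files, and a plain nearest-neighbor rule stands in for my actual single-image classification algorithm), the per-sequence step looks something like this:

```python
import glob
import time

import imageio  # assumed here for reading movie files (requires the ffmpeg plugin)
import numpy as np

def sequence_to_image(path):
    """Collapse one gesture movie file into a single motion-blurred grayscale image."""
    frames = imageio.mimread(path, memtest=False)       # list of H x W x 3 frames
    gray = np.stack([f.mean(axis=2) for f in frames])   # crude grayscale conversion (assumes RGB)
    start = time.perf_counter()
    blurred = gray.mean(axis=0)                         # vectorized per-pixel time-average
    elapsed = time.perf_counter() - start               # averaging only, excluding the I/O above
    return blurred, elapsed

# Hypothetical layout: one folder of left-hand movies, one of right-hand movies.
left = sorted(glob.glob("gestures/left/*.mp4"))
right = sorted(glob.glob("gestures/right/*.mp4"))
paths = left + right
labels = ["left"] * len(left) + ["right"] * len(right)

images = []
for p in paths:
    img, t = sequence_to_image(p)
    images.append(img.ravel())
    print(f"{p}: averaging took {t:.4f} s")

# Placeholder classifier: 1-nearest-neighbor on the blurred images,
# standing in for whatever single-image classifier you prefer.
def classify(query, train_images, train_labels):
    dists = [np.linalg.norm(query - x) for x in train_images]
    return train_labels[int(np.argmin(dists))]
```

Timing the mean separately from loading and grayscale conversion reflects the point above: the averaging itself is a single vectorized call, so the surrounding I/O and conversion are likely to dominate the measured runtime.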
