I’m pretty sure I came up with this a long time ago, but I was recently reminded of it, so I decided to implement it again. All it does is find edges in a scene, and it’s a cheap and dirty version of the algorithm I describe here. The gist is that, rather than attempting to optimize the delimiter value that decides whether an edge has been found, you simply calculate the standard deviation of the colors in each row and each column of the image (i.e., horizontally and vertically). If two adjacent regions differ by more than the applicable standard deviation, a marker is placed as a delimiter, indicating part of an edge.
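To make the delimiter test concrete, here is a minimal sketch of the idea, assuming a single-channel (grayscale) version of the super-pixel image; the function name and loop structure are my own illustration, not taken from the linked scripts, which handle full RGB data.

% Sketch: mark an edge wherever two adjacent values in a row or column
% differ by more than that row's / column's standard deviation.
function edges = std_dev_edges(A)
  A = double(A);
  [num_rows, num_cols] = size(A);
  edges = zeros(num_rows, num_cols);

  % Horizontal pass: each row's standard deviation is its delimiter threshold.
  for i = 1:num_rows
    s = std(A(i, :));
    for j = 1:(num_cols - 1)
      if abs(A(i, j + 1) - A(i, j)) > s
        edges(i, j) = 1;
      end
    end
  end

  % Vertical pass: same test down each column.
  for j = 1:num_cols
    s = std(A(:, j));
    for i = 1:(num_rows - 1)
      if abs(A(i + 1, j) - A(i, j)) > s
        edges(i, j) = 1;
      end
    end
  end
end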



The runtime is insane: about 0.83 seconds on an iMac to process a full-color image with 3,145,728 RGB pixels. The image on the left is the original, the image in the center is the super-pixel image generated by the process (the one the delimiter actually analyzes), and the image on the right shows the edges detected by the algorithm. This is clearly fast enough to process information in real time, which would allow for instant shape detection by robots and other machines. Keep in mind this was run on a consumer device, so specialized hardware could conceivably be orders of magnitude faster, which would allow for real-time, full-HD video processing. The code is below, and you can download it from the links that follow.
https://www.dropbox.com/s/mr366a9s8s0j4dr/std_dev_del_CMDNLINE.m?dl=0
https://www.dropbox.com/s/l72bic7n6sjv13o/calculate_total_im_diff.m?dl=0
https://www.dropbox.com/s/ii5s4h3e04n1cha/test_image_consistency.m?dl=0
https://www.dropbox.com/s/z5gcx7v8ej14zvg/generate_avg_color_image_vect.m?dl=0
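For context on the super-pixel stage, here is an illustrative sketch of how an averaged-color image could be generated over square blocks of side N. This is an unvectorized, hypothetical version written for clarity, not the linked generate_avg_color_image_vect.m; the function name and parameters are my own.

% Sketch: replace every pixel in each N x N block with the block's mean color.
function avg_im = generate_avg_color_sketch(im, N)
  im = double(im);
  [rows, cols, channels] = size(im);
  avg_im = zeros(rows, cols, channels);
  for i = 1:N:rows
    for j = 1:N:cols
      % Clip the block at the image boundary.
      r = i:min(i + N - 1, rows);
      c = j:min(j + N - 1, cols);
      for k = 1:channels
        % Mean value of this block for the current color channel.
        avg_im(r, c, k) = mean(mean(im(r, c, k)));
      end
    end
  end
  avg_im = uint8(avg_im);
end

Usage would be something like avg = generate_avg_color_sketch(imread('test.jpg'), 8); the resulting averaged image is what the standard-deviation delimiter above would then scan row by row and column by column.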