# How can I automatically distort (transform) a binary image to match a similar one?

Jon on 20 Jul 2015
Larger context:
I am attempting to automatically extract inflection points from a river centerline, i.e. points where the curvature changes sign (or equivalently, where the curvature passes through zero). These inflection points often correspond to "straight" sections of river, and they are notoriously difficult to compute reliably in noisy data.
My goal is to extract these inflection points along a migrating, meandering channel so that individual bends can be tracked. I'm actually not concerned with finding the "true" inflection point, per se, because that is practically impossible to do consistently through time for the same reach; I just need some kind of reference points through which to connect the images through time. This is complicated by the fact that the river grows its loops through migrating bends and shortens itself through cutoffs.
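To make the curvature-sign definition concrete, here is a minimal sketch of locating inflection points as sign changes of the signed curvature of a sampled centerline (a Python/NumPy illustration, not code from this thread; the function names are my own):

```python
import numpy as np

def signed_curvature(x, y):
    """Signed curvature of a parametric curve sampled at (x, y)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def inflection_indices(x, y):
    """Indices where the curvature signal changes sign."""
    k = signed_curvature(x, y)
    return np.where(np.diff(np.sign(k)) != 0)[0]

# A sine curve has inflection points where it crosses zero.
t = np.linspace(0, 4 * np.pi, 1000)
idx = inflection_indices(t, np.sin(t))
```

In practice the curvature of a noisy digitized centerline would need smoothing before the sign test, which is exactly why the question looks for a more robust route.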
Specific problem:
After wrestling with inflection points for a couple of days, it occurred to me that I could perhaps take an image processing approach. My thinking is this: given two binarized river images (say, at t and t+1) that are quite similar, can I automatically compute a transformation between the two images such that a point in image t can be projected into image t+1? That way, as long as the inflection points are properly resolved at the first time step, I can iterate forward, reprojecting the inflection points as the channel evolves.
I have tried the feature-matching/estimateGeometricTransform route, but it was a mess, presumably because I'm using a binary image (no intensity values). A cross-correlation approach is also insufficient because it yields a single global transformation, and a translation-only one at that. I need a more local transformation, one that will scale and skew locally to make the two river images (roughly) match. I also need to avoid manually creating control points, because this is to be applied to a very large reach (>100 inflection points) over many time steps.
Note that in this figure, I chose a large dt between realizations to highlight the differences between reaches through time. In reality the distortion between time steps is much smaller (except when a cutoff occurs). The goal is to find a transformation between t and t+1, then apply it to the inflection points at t so they can be tracked to t+1.
Any ideas, tips, or suggestions would be very helpful!

Jon on 27 Jul 2015
Tracy's suggestion below of using PIV software gave me an idea which, after a bit of bookkeeping code, seems to work well.
The main problem was that I needed a local matching rather than a global one. PIV software does essentially this by breaking the image up into windows. Since a curve can be represented in 1-D by its curvature signal, I decided to implement a windowed curvature-matching algorithm. Something like:
1. Retrieve index of inflection point (or any point of interest along centerline) from centerline at t=t1.
2. Window the centerline at t=t1 with the inflection point at the center.
3. Using the windowed signal from 2), shift this signal along the t=t2 centerline and compute a metric of similarity at each lag (correlation didn't work well for me since the signals are somewhat cyclical; root mean square error worked well).
4. The lag that minimizes the error metric in 3) gives the number of indices by which to shift the inflection point of t1 to match its location at t2.
5. Optional: I added a substantial bit of code to reduce the extent of the search in 3), but it wasn't worth the time. It might be worthwhile if you were working with millions of data points.
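Steps 1-4 above can be sketched as follows (a Python/NumPy illustration of the idea, not the original MATLAB; `match_point` and its arguments are hypothetical names):

```python
import numpy as np

def match_point(curv1, curv2, idx, W=100):
    """Find the index along curv2 best matching the point at idx in curv1.

    curv1, curv2: 1-D curvature signals of the centerlines at t1 and t2.
    Slides a window of half-width W, centered on idx in curv1, along
    curv2 and returns the center index that minimizes the RMSE.
    """
    win = curv1[idx - W : idx + W + 1]              # step 2: window the signal
    best_err, best_idx = np.inf, None
    for c in range(W, len(curv2) - W):              # step 3: slide along t2
        seg = curv2[c - W : c + W + 1]
        err = np.sqrt(np.mean((win - seg) ** 2))    # RMSE similarity metric
        if err < best_err:                          # step 4: minimum error wins
            best_err, best_idx = err, c
    return best_idx
```

For example, matching a signal against a copy of itself shifted by 5 samples should recover that 5-sample offset.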
sara moustafa on 16 Mar 2016
I'm pursuing an MSc degree titled "Biometric human identification using hand veins patterns". I liked your suggestion, and I'm interested in obtaining a copy of your MATLAB code for inflection point detection and the windowed curvature-matching algorithm, to use in my work. I have segmented my hand vein images and extracted a mono-pixel-width vein skeleton, and I need to test your suggestion for matching those vein skeleton images. Thanks in advance.
Jon on 18 Mar 2016
Here are the codes I used, but I guarantee nothing. Instead of centerlines, you can just feed in your "mono-pixel width vein skeleton." You will also have to supply a W (or modify the code); it sets the window size of the search, so try W=100 at first.
I use the position of the point, the direction of the line, and the curvature of the line to determine the point's location at the next time step. You can change the weight given to each of these quantities in lines 36-38. Good luck.
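As a rough illustration of that weighted comparison (a Python sketch, not the actual MATLAB code; the function and weight names are placeholders standing in for the values set on lines 36-38 of the original):

```python
import numpy as np

def combined_error(p1, p2, theta1, theta2, k1, k2,
                   w_pos=1.0, w_dir=1.0, w_curv=1.0):
    """Weighted mismatch between a point at t1 and a candidate at t2.

    p: (x, y) position, theta: local line direction (radians),
    k: local curvature. Lower is a better match.
    """
    e_pos = np.hypot(*(np.asarray(p1) - np.asarray(p2)))       # distance
    e_dir = abs(np.angle(np.exp(1j * (theta1 - theta2))))      # wrapped angle
    e_curv = abs(k1 - k2)                                      # curvature diff
    return w_pos * e_pos + w_dir * e_dir + w_curv * e_curv
```

The candidate at t2 minimizing this combined error would be taken as the point's new location; the wrapped angle difference keeps directions near 0 and 2π from being treated as far apart.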

Tracy on 21 Jul 2015
Have you looked into Lucas-Kanade or other optical flow methods? There's an example on the File Exchange here. A subwindow approach to cross-correlation may also be useful; in experimental fluid mechanics we use PIV (particle image velocimetry) and PTV (particle tracking velocimetry), and there may be some overlap between those methods and your application. The basic premise is that you compare a small subset of one image with a small subset of the other image and, by cross-correlating, find the 2D shift that yields the highest correlation. These slides by Kiger, Westerweel, and Poelma go into the technical side of it if you're interested.
Not sure how immediately helpful that is for your goal of finding the transformation, but just throwing some ideas out there!
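The subwindow premise can be sketched like so (a Python/NumPy illustration under simplifying assumptions: integer shifts, a plain dot-product score rather than a full PIV correlator; all names are hypothetical):

```python
import numpy as np

def window_shift(img1, img2, center, half=16, search=8):
    """Estimate the local 2D shift of a subwindow from img1 to img2.

    Compares the (2*half+1)^2 window around `center` in img1 with
    shifted windows in img2, returning the (dy, dx) whose window
    scores the highest correlation (here, a dot product of patches).
    """
    r, c = center
    tpl = img1[r - half : r + half + 1, c - half : c + half + 1].astype(float)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = img2[r + dy - half : r + dy + half + 1,
                       c + dx - half : c + dx + half + 1].astype(float)
            score = np.sum(tpl * win)        # correlation at this lag
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

Doing this window by window yields a field of local displacements rather than one global transform, which is the property that matters for the river problem.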
Jon on 27 Jul 2015
Thanks for your suggestion, Tracy. I didn't end up going that route, but it inspired me to solve the problem using a somewhat analogous method.