# How can I speed up my code?

Rudy on 31 Mar 2016
Answered: sam0037 on 13 Apr 2016
Hi there,
My brother has an old photo taken from an airplane, but it is deformed because of the lens; the specs of the lens are unknown. He also has a map on which all buildings currently in that area are drawn. I have combined the two maps, and the deformation is clearly visible.

I have written a script that finds the deformation: we manually set reference points (for instance a green dot on the edge of a building, and a red dot on the same edge of that building in the other map). The script then computes the deformation vectors at all those reference points.

My plan: in a while loop, each pixel becomes the average of its first neighbours. The loop stops when the difference between the current and previous matrix (measured as the sum of all amplitudes) drops below some small threshold. In the end we would have a matrix of vectors that approximately describes the complete deformation of the photo. Applying the inverse deformation would then recover the old photo, but in the real dimensions.

THE PROBLEM: the photo is about 21k x 19k pixels, which means HUGE matrices. It could take ages until the deformation is found. What I ask you guys: does anyone know a way to solve this problem more quickly?
Kind regards, Rudy
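For reference, the neighbour-averaging loop described above can be vectorised so that one pass updates every pixel at once instead of looping pixel by pixel. A sketch in Python/NumPy (the original MATLAB script is not shown, so the function and variable names here are illustrative, not Rudy's):

```python
import numpy as np

def relax(field, known_mask, known_values, tol=1e-3, max_iters=10000):
    """Repeatedly replace each pixel by the mean of its four neighbours,
    keeping the manually set reference vectors pinned, until the total
    change between iterations falls below `tol`."""
    f = field.copy()
    for _ in range(max_iters):
        # Neighbour average via array shifts; edge pixels reuse their
        # own value through np.pad's 'edge' mode (no per-pixel loop).
        p = np.pad(f, 1, mode='edge')
        new = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        new[known_mask] = known_values[known_mask]   # pin reference points
        if np.abs(new - f).sum() < tol:              # convergence test
            return new
        f = new
    return f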

sam0037 on 13 Apr 2016
Hi,
Let us break this work into two parts, say 'Code Re-Designing' and 'Algorithm Re-Designing'.
Part 1 :: Code Re-Designing
Firstly, you would need to profile the code to find the computationally expensive areas and then work on those (in MATLAB: `profile on`, run the script, then `profile viewer`). Secondly, if you have access to the Parallel Computing Toolbox, you can split the work between individual workers.
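To make the profiling step concrete, here is a minimal sketch using Python's standard-library `cProfile` (the MATLAB equivalent is the built-in profiler mentioned above); `deformation_pass` is a hypothetical stand-in for one expensive iteration, not the original code:

```python
import cProfile
import pstats

def deformation_pass(n):
    """Stand-in for one expensive iteration of the real script."""
    total = 0.0
    for i in range(n):
        total += i * 0.5
    return total

profiler = cProfile.Profile()
profiler.enable()
result = deformation_pass(200_000)
profiler.disable()

# Rank functions by cumulative time to see where the run time goes;
# optimise only the entries at the top of this list.
stats = pstats.Stats(profiler).sort_stats("cumulative")
stats.print_stats(5)
```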
The algorithm seems to be embarrassingly parallel: there are no dependencies between the data sets within an iteration. Hence I would expect a good speed-up after parallelizing. One way would be to break the image into n tiles if you have n workers, let each worker process its tile independently, and finally rejoin the tiles to get the full image. Please ignore this if the algorithm cannot be parallelized.
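A minimal sketch of the tile-and-rejoin idea for one smoothing pass, in Python/NumPy (names are illustrative). One halo row is shared between adjacent strips so the neighbour averages at the seams use the true neighbouring pixels; threads are used only to keep the sketch portable, and with the Parallel Computing Toolbox the MATLAB analogue would be `parfor` over the tiles:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def smooth_strip(args):
    """One neighbour-averaging pass on a horizontal strip that carries a
    one-row halo from each adjacent strip."""
    strip, has_top_halo, has_bottom_halo = args
    p = np.pad(strip, 1, mode='edge')
    out = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    top = 1 if has_top_halo else 0
    bottom = out.shape[0] - (1 if has_bottom_halo else 0)
    return out[top:bottom]            # drop rows owned by neighbouring strips

def smooth_parallel(img, n_workers=4):
    """Split the image into n_workers row strips, smooth them concurrently,
    and stitch the results back together."""
    bounds = np.linspace(0, img.shape[0], n_workers + 1).astype(int)
    jobs = []
    for i in range(n_workers):
        lo = max(bounds[i] - 1, 0)                  # add halo row above
        hi = min(bounds[i + 1] + 1, img.shape[0])   # add halo row below
        jobs.append((img[lo:hi], bounds[i] > 0, bounds[i + 1] < img.shape[0]))
    with ThreadPoolExecutor(n_workers) as pool:
        strips = list(pool.map(smooth_strip, jobs))
    return np.vstack(strips)
```

Because each strip carries its halo rows, the stitched result is identical to a single-worker pass over the whole image.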
Part 2 :: Algorithm Re-Designing
a. You can resize the images to a smaller scale and then work on them. This will certainly reduce the quality of your result, but again, it depends on your objective.
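A sketch of the resize idea in Python/NumPy (illustrative names, and block-averaging is just one possible resampling choice): run the expensive iteration on a coarsened grid first, then upsample the converged coarse result as the starting guess at full resolution, where far fewer passes are then needed:

```python
import numpy as np

def downsample(field, factor):
    """Coarsen by averaging factor-by-factor blocks (dimensions must divide)."""
    h, w = field.shape
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(field, factor):
    """Nearest-neighbour upsampling back towards the fine grid."""
    return np.repeat(np.repeat(field, factor, axis=0), factor, axis=1)

# Coarse-to-fine: each pass on the coarse grid touches factor^2 fewer
# pixels, and the upsampled coarse solution is already close to the
# fine-grid answer, so the full-resolution solve starts nearly converged.
fine = np.arange(16, dtype=float).reshape(4, 4)
guess = upsample(downsample(fine, 2), 2)   # cheap full-resolution starting guess
```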
b. Instead of looping over all the pixels in every iteration, consider marking only the pixels of interest for the next iteration. What I mean is that only a certain set of pixels contributes to the measurable deformation between iterations; that set is the pixels of interest for the next pass. You would need a measure or selection criterion for including a pixel in the next iteration.
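One possible selection criterion, sketched in Python/NumPy (an assumption on my part, not from the original script): flag the pixels whose value changed noticeably in the last pass, plus their four neighbours, since only those can change in the next averaging pass:

```python
import numpy as np

def active_pixels(prev, curr, thresh):
    """Return a boolean mask of pixels that changed by more than `thresh`
    between two passes, dilated to include their four neighbours; all
    other pixels can be skipped in the next iteration."""
    changed = np.abs(curr - prev) > thresh
    active = changed.copy()
    active[1:, :]  |= changed[:-1, :]   # propagate flag to pixel below
    active[:-1, :] |= changed[1:, :]    # ... to pixel above
    active[:, 1:]  |= changed[:, :-1]   # ... to right neighbour
    active[:, :-1] |= changed[:, 1:]    # ... to left neighbour
    return active
```

Once most of the field has settled, the active set shrinks to a thin front around the reference points, so each pass touches only a small fraction of the 21k x 19k pixels.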
Hope this helps. All the best!!!