How to identify a thin line in a noisy sideview picture?

I have a set of binarized pictures (like the following one) of the water surface in a small portion of a flume, taken during a laboratory experiment. The water surface appears as the thin curved line, but as you can see the picture is "corrupted" by other large white patches, caused by light reflections on the flume's glass walls, that I could not get rid of.
I am working on a script that detects the position of the water surface and fits it with a spline. I tried to remove the extra patches by filtering them by area (i.e., removing patches whose pixel area is below a threshold) before fitting the spline, but I am stuck: for many pictures the area of the water-surface patch is roughly the same as that of the extra patches, so both are removed by the filter. Moreover, the extra patches occasionally intersect the surface patch, as in the picture below.
Can you help me figure out a method to fit a curve that identifies the surface and is as insensitive as possible to the extra patches? Thank you.
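For reference, the area filter I describe can be sketched like this (the file name and the area thresholds are placeholder values, not the ones I actually use):

```matlab
% Sketch of the area filter described above; the file name and the
% area thresholds are placeholders, not the values I actually use.
BW = imread("surface.png") > 0;          % binarized frame
BWopen = bwareaopen(BW, 200);            % drop blobs smaller than 200 px
BWband = bwareafilt(BW, [50 2000]);      % or keep blobs within an area band
imshowpair(BWopen, BWband, 'montage')
```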

Answers (3)

Here are some ideas to get it closer:
BW = imread("example.png");
% Filter to remove patches (property ranges found with the Image Region Analyzer app)
CC = bwconncomp(BW);
CC = bwpropfilt(CC, 'EulerNumber', [-5, 1]);
CC = bwpropfilt(CC, 'Eccentricity', [0.97, 1]);
CC = bwpropfilt(CC, 'Orientation', [-1, 75]);
BW = cc2bw(CC);
% Try to remove the patches that intersect the surface line:
% erode to eliminate the thin curved line and isolate the patches...
BW2 = imerode(BW, strel('disk', 4));
% ...then dilate the patches back to roughly their previous size.
BW2 = imdilate(BW2, strel('disk', 4));
imshow(BW & ~BW2)
% Apply the same filters again to get rid of the leftover pixels.
BW = BW & ~BW2;
CC = bwconncomp(BW);
CC = bwpropfilt(CC, 'EulerNumber', [-5, 1]);
CC = bwpropfilt(CC, 'Eccentricity', [0.97, 1]);
CC = bwpropfilt(CC, 'Orientation', [-1, 75]);
BW = cc2bw(CC);
% Could also filter by area here.
imshow(BW)

1 Comment

@Kevin Holly, thank you very much for your help! I didn't know about bwpropfilt. I will try to experiment with it a little bit, especially with the other pictures in the set, for which the definition of the water surface and the amount of extra patches may be different.


It would be better to eliminate the reflections first. I don't know what you've tried. Did you try changing the geometry of the camera and light source(s)? Did you try crossed polarizers? Baffles? Gaffer's tape?
If you tried all that and still have reflections, then it might be that your binarization algorithm is not that great. What did you do to binarize it? Can you attach the original gray scale image so we can look at that?
How smooth is the surface of the liquid? Why are you using a spline? Why not use the actual coordinates? Why are you not fitting something like a cubic or quadratic polynomial to the surface? What are you going to do with the spline after determining it?
Have you tried deep learning on the gray scale image to find the surface line?

1 Comment

@Image Analyst thank you very much for your insight and I'm sorry for the terribly late response (life and work kept me very busy).
I'm getting back to this issue because the previous comment from @Kevin Holly, albeit very helpful in introducing me to the manipulation of b/w images, did not completely solve my problems. Please give me some time to re-organize my ideas and workflow and I'll get back to you.


It looks like the blobs are all on the upper side of the water surface, so I would just scan the image and find the lowest white pixel in each column, like this:
% Demo by Image Analyst
% Initialization steps:
clc; % Clear the command window.
close all; % Close all figures (except those of imtool.)
clear; % Erase all existing variables. Or clearvars if you want.
%workspace; % Make sure the workspace panel is showing.
format long g;
format compact;
fontSize = 18;
%--------------------------------------------------------------------------------------------------------
% READ IN FIRST IMAGE
folder = pwd;
baseFileName = "water surface.png";
fullFileName = fullfile(folder, baseFileName);
% Check if file exists.
if ~isfile(fullFileName)
    % The file doesn't exist -- didn't find it in that folder.
    % Check the entire search path (other folders) by stripping off the folder.
    fullFileNameOnSearchPath = baseFileName; % No path this time.
    if ~exist(fullFileNameOnSearchPath, 'file')
        % Still didn't find it. Alert the user.
        errorMessage = sprintf('Error: %s does not exist in the search path folders.', fullFileName);
        uiwait(warndlg(errorMessage));
        return;
    end
end
% Read in image file.
grayImage = imread(fullFileName);
% Get size
[rows, columns, numberOfColorChannels] = size(grayImage)
% rows = 218, columns = 1305, numberOfColorChannels = 3
% Get gray scale version of it.
if numberOfColorChannels == 3
grayImage = grayImage(:, :, 1); % Take red channel.
end
% It's grayscale so binarize it.
binaryImage = imbinarize(grayImage);
% The first and last two rows and first and last two columns
% Are not valid data - it's some kind of a frame so get rid of it.
binaryImage = binaryImage(3:end-2, 3:end-2);
% Get size
[rows, columns, numberOfColorChannels] = size(binaryImage)
% rows = 214, columns = 1301, numberOfColorChannels = 1
% Display the image.
subplot(2, 1, 1);
imshow(binaryImage);
axis('on', 'image');
impixelinfo;
caption = sprintf('Original Binary Image : %s', baseFileName);
title(caption, 'FontSize', fontSize, 'Interpreter', 'None');
% Maximize window.
g = gcf;
g.WindowState = 'maximized';
g.Name = 'Demo by Image Analyst';
g.NumberTitle = 'off';
drawnow;
%--------------------------------------------------------------------------------------------------------
% Find the lowest white pixel in each column.
[rows, columns] = size(binaryImage);
waterSurface = nan(1, columns);
for col = 1 : columns
    % Extract this one column.
    thisColumn = binaryImage(:, col);
    % Find the last (lowest) white pixel in the column.
    r = find(thisColumn, 1, 'last');
    if ~isempty(r)
        waterSurface(col) = r;
    end
end
% Plot it
% Display the image.
subplot(2, 1, 2);
% imshow(binaryImage);
% axis('on', 'image');
% impixelinfo;
% caption = sprintf('Original Binary Image : %s', baseFileName);
% title(caption, 'FontSize', fontSize, 'Interpreter', 'None');
hold on
plot(1:columns, waterSurface, 'r-')
axis ij
grid on;
% It's kind of noisy so fit a 4th order polynomial through the valid data.
validIndexes = ~isnan(waterSurface);
x = 1:columns;
validX = x(validIndexes);
validY = waterSurface(validIndexes);
% Center and scale to avoid a "Polynomial is badly conditioned" warning.
[coeffs, S, mu] = polyfit(validX, validY, 4);
% Evaluate the fit everywhere.
fittedY = polyval(coeffs, x, S, mu);
% Plot the fit.
plot(x, fittedY, 'b-', 'LineWidth', 2)
legend('Noisy data', 'Fitted data', 'Location', 'north')
You could use movmedian to eliminate some of the noise spikes but really the fitting will do the same thing. You can see the blue fitted line follows the actual noisy data's mean pretty well.
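As a hedged sketch of the movmedian suggestion (the 15-column window is an assumed value, and waterSurface is the per-column trace from the loop above):

```matlab
% Hedged sketch: pre-smooth the traced surface with a moving median before
% the polynomial fit; the 15-column window is an assumed value.
smoothedSurface = movmedian(waterSurface, 15, 'omitnan');
hold on
plot(1:numel(smoothedSurface), smoothedSurface, 'g-', 'LineWidth', 1.5)
```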

2 Comments

@Image Analyst, thank you very much for your kind input! The outcome is really nice and would suit my purposes well. However, your initial comment on the binarization of the pictures made me reconsider my process.
The issue I'm facing right now is that I need to trace the water surface over a lot of consecutive frames, while keeping the surface-tracing process as user-input-free as possible. So the settings (contrast adjustment, binarization, speckle and reflection removal...) that may be good for "finding" the surface in a given picture might not be optimal for another picture in the same set.
You can find attached two raw pictures in my set, representative of the range of visual conditions I have to deal with. As you can see, the lighting conditions under which they have been taken are not great, and sadly I cannot get back to re-take them. In both of them the water surface can be seen (faintly); hence the need to post-process the pictures.
In short, what I do is:
1) remove the slightly noisy black-ish background that is common among the set of pictures: I do so by calculating and removing the mean of pixel intensity across all images;
2) adjust the contrast with imadjust, by using a very low intensity range (say, between 2000 and 6000);
3) binarize the images with a threshold defined by me (here I go through some trial-error until I find the threshold that highlights the most of the water surface while leaving out as much "speckles" and reflections as possible; but it's very hard to find a threshold that goes well for all pictures);
4) remove the remaining "dirt": isolated pixels and small blobs, with bwareaopen (again, the optimal parameters vary wildly with how the previous steps were carried out, as well as among pictures. Besides, when the binarized water surface is fragmented, this step also destroys parts of the surface itself. Finally, some blobs occasionally survive below the water surface, which makes identifying the surface as the lowest white pixel in each column more problematic).
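A minimal sketch of those four steps, with assumed parameter values and a random stack standing in for my real frames:

```matlab
% Hedged sketch of the four preprocessing steps listed above; the parameter
% values are assumptions, and the random stack stands in for the real frames.
frames = rand(218, 1305, 10);                 % placeholder image stack
k  = 1;                                       % frame to process
bg = mean(frames, 3);                         % 1) mean background across frames
I  = frames(:, :, k) - bg;                    %    subtract it from frame k
I  = imadjust(mat2gray(I), [0.30 0.70]);      % 2) stretch a narrow intensity band
BW = imbinarize(I, 0.5);                      % 3) hand-tuned global threshold
BW = bwareaopen(BW, 50);                      % 4) drop speckles under 50 px
imshow(BW)
```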
-----
As for the questions in your first comment, I answer here:
  • it was my first time taking pictures for Particle Tracking Velocimetry and the equipment for doing it was rather scarce and low-cost (a halogen lamp and a high-resolution camera). As I mentioned, unfortunately the experimental setting that I used to take such pictures has been dismantled and I have to take the best out of what I have;
  • the surface is always smooth and regular. There is no real reason for using a spline over a polynomial on the real coordinates, other than getting a slightly smoother final output. The surface-tracking output will be used to identify the crests and troughs of a wave train transiting the field of view (the water surface is moving in response to this wave train);
  • I never tried deep learning.
You might try the interactive thresholding utility in my File Exchange: it lets you threshold the image with sliders and see the original image, the binarized/thresholded version, and the masked version all at once.
Some things novices do, such as contrast enhancement with imadjust or histogram equalization, are unnecessary. All those do is make the image easier to see; they don't help with segmentation. They just shift the threshold from its original value. For example, if the threshold in the original image is 90, in the adjusted image it might be 40 or 200 or whatever. But once the threshold is picked, the binary image will be the same regardless of whether you used contrast enhancement. So basically your steps 1 and 2 are not needed.
What might help is averaging several time-adjacent frames together to reduce the noise. What's your frame rate? Do you really need that time resolution, or can you average two frames together and get the profile for every other frame time? For example, if the frame time is 0.1 s (10 frames per second) and you averaged 2 frames together, you would get a less noisy frame, but only every 0.2 s.
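A minimal sketch of that pairwise averaging, with a random stack standing in for the real frames:

```matlab
% Hedged sketch: average adjacent frame pairs to reduce noise, halving the
% time resolution; the random stack is a placeholder for the real frames.
frames = rand(218, 1305, 10);                 % placeholder image stack
pairMean = (frames(:, :, 1:2:end-1) + frames(:, :, 2:2:end)) / 2;
imshow(pairMean(:, :, 1))                     % first averaged frame
```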
Deep learning would require you to hand trace the line for hundreds of training images (a subset of all your images), then train a model, and then you can run the model on all your images. Marking ground truth by drawing the line with imfreehand would be tedious.
Another way might be to use something like graydist to try to find the brightest path.
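One possible sketch of the graydist idea, assuming a grayscale frame in a placeholder file "frame.png": sum the gray-weighted geodesic distances from the left and right image borders; the regional minima of the sum lie along the brightest left-to-right path.

```matlab
% Hedged sketch of the graydist idea; the file name is a placeholder and
% the image is assumed to be grayscale.
G    = im2double(imread("frame.png"));
cost = 1 - mat2gray(G);                      % bright ridge -> low traversal cost
seedL = false(size(G)); seedL(:, 1)   = true;  % left border seeds
seedR = false(size(G)); seedR(:, end) = true;  % right border seeds
D = graydist(cost, seedL) + graydist(cost, seedR);
path = imregionalmin(round(D, 4));           % rounding merges near-ties on the path
imshow(path)
```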
Or use an optimization technique like A* to find the brightest path. Or try ridge-finding algorithms such as Hessian- or Frangi-based filters, for example https://www.mathworks.com/matlabcentral/fileexchange/24409-hessian-based-frangi-vesselness-filter
The approach that might work best is one I used for radiological blood-vessel tracking during my PhD: dynamic programming. Unfortunately I don't have MATLAB code for it.
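One possible sketch of that dynamic-programming idea (not the original PhD code; the file name is a placeholder): trace one row per column, minimizing accumulated darkness while letting the row shift by at most one pixel between columns.

```matlab
% Hedged sketch of dynamic programming for line tracing; the file name is a
% placeholder and the image is assumed to be grayscale.
G = im2double(imread("frame.png"));
[rows, cols] = size(G);
cost = 1 - mat2gray(G);                 % bright pixels are cheap to visit
acc  = cost;                            % accumulated minimal cost per pixel
back = zeros(rows, cols);               % backpointer to the previous row
for c = 2 : cols
    for r = 1 : rows
        rr = max(1, r-1) : min(rows, r+1);   % reachable predecessor rows
        [m, k] = min(acc(rr, c-1));
        acc(r, c)  = cost(r, c) + m;
        back(r, c) = rr(k);
    end
end
% Backtrack from the cheapest endpoint in the last column.
path = zeros(1, cols);
[~, path(cols)] = min(acc(:, cols));
for c = cols-1 : -1 : 1
    path(c) = back(path(c+1), c+1);
end
plot(1:cols, path, 'g-', 'LineWidth', 1.5)
```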


Release: R2025b
Asked: 17 Dec 2025
Commented: 12 Mar 2026
