Distinguishing between colors in LAB color space

This is a general question which will help me in formulating the algorithm.
I have converted my RGB image to CIELAB color space. Now I will be using the A and the B channels for distinguishing features based on their hues. Hypothetically, if I take a small rectangular image patch within the A channel. Note that this rectangular image patch consists of different hues due to inclusion of two different feature parts within the patch. Then how can I numerically distinguish between the those different hues and classify them to be different from one another? I do know about DeltaE for color comparison but here I am talking about distinguishing hues within one channel of the CIELAB color space.
Any ideas or links would be appreciated.

Answers (2)

"hues within one channel of the CIELAB color space" is a meaningless statement.
If you take a rectangular ROI in your image (and it does not matter if it's the A channel or any other channel), then the pixels in that region will probably have different RGB values and thus different LAB values. So if there are 1000 pixels in the patch, you could potentially have 1000 unique colors. You have to decide what "distinguish" means to you. They are already distinguished because they have different values. To classify the region you have to decide what your classes are, like a certain range of hues or whatever.

For example, you could transform the A and B channels into a hue channel (or better yet, just get it directly from the rgb2hsv() function) and then create 10 color classes of hues, like color1 = hues between 0 and 0.1, color2 = hues between 0.1 and 0.2, and so on. Then you can just threshold your hue channel image 10 times to get 10 different color classes. Each of the 10 images would show only pixels from that one color class.
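A minimal sketch of that 10-class hue thresholding might look like this (assuming an RGB image `rgbImage` is already in the workspace; the variable names are just illustrative):

```matlab
% Threshold the hue channel into 10 color classes and show each class.
hsvImage = rgb2hsv(rgbImage);    % hue is channel 1, in the range [0, 1]
hueImage = hsvImage(:, :, 1);
numClasses = 10;
binEdges = linspace(0, 1, numClasses + 1);
for k = 1 : numClasses
	% Logical mask of pixels whose hue falls into class k.
	classMask = hueImage >= binEdges(k) & hueImage < binEdges(k + 1);
	% Black out everything except this one color class.
	maskedRgb = rgbImage;
	maskedRgb(repmat(~classMask, [1, 1, 3])) = 0;
	figure;
	imshow(maskedRgb);
	title(sprintf('Hue class %d', k));
end
```

(Pixels with hue exactly 1 fall outside the last bin here; use `<=` on the final edge if that matters for your data.)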
You could use delta E if you want to specify some reference color, and then get the delta E of all other pixel colors from that one. I attach a demo where I do that on a sliding window so you get a localized delta E. The delta E's from the mean in the sliding window are computed. High delta E's will appear at places where the color is highly variable (like an edge).
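The sliding-window idea can be sketched roughly like this (an assumption-laden sketch, not the attached demo itself; it assumes `labImage` is an M-by-N-by-3 double LAB image, e.g. from rgb2lab(), and a 15x15 window):

```matlab
% Localized delta E: compare each pixel's LAB value to the mean LAB of
% a sliding window around it. High values mark color edges/variation.
winSize = 15;
win = ones(winSize) / winSize^2;              % box averaging kernel
Lmean = conv2(labImage(:, :, 1), win, 'same');
Amean = conv2(labImage(:, :, 2), win, 'same');
Bmean = conv2(labImage(:, :, 3), win, 'same');
deltaEImage = sqrt((labImage(:, :, 1) - Lmean) .^ 2 + ...
                   (labImage(:, :, 2) - Amean) .^ 2 + ...
                   (labImage(:, :, 3) - Bmean) .^ 2);
```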
My File Exchange has lots of color segmentation/classification demos, as you probably know. http://www.mathworks.com/matlabcentral/fileexchange/?term=authorid%3A31862

10 Comments

This is very informative. But I am a bit confused now, since I need to detect the different colors within the image, store each color in a separate array, and then compute the average of each color present within the image. Is there any possibility of doing these operations in LAB color space?
I think I can use Delta E to detect different colors. Once I obtain the RGB output image with the detected colors, I just need to store the detected colors in separate arrays. The number of arrays will be equivalent to the number of colors detected. I can use HSV color space for storing. Then I compute the average of the color stored in each array. I know I must be wrong somewhere, so any further ideas would be much appreciated.
Thanks
You can take the mean of the L image, the mean of the A image, and the mean of the B image if you want. That will give you the average color of the whole image. Is that what you want? Of course you can do that to get the average color, and of course you can do it on the LAB images instead of the RGB images. But what are you going to do with that info? Is that going to be your reference color for computing color difference? You can compute a delta E image with that and the component images easily:
deltaE = sqrt((Limage-Lref).^2+(Aimage-Aref).^2+(Bimage-Bref).^2);
But you don't need to do that to find out how many unique colors there are. For that you can take the 3D histogram in RGB space and count the number of non-zero bins. Do you know how to do that? But then I don't know what you want to do after that.
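Counting non-zero bins of the full 3D histogram is equivalent to counting unique colors, which you can sketch directly (assuming a uint8 RGB image `rgbImage`):

```matlab
% Count the unique colors in an RGB image: reshape to an N-by-3 list of
% pixels and find the unique rows. Each unique row is one distinct color,
% i.e. one non-empty bin of the full 256x256x256 RGB histogram.
pixelList = reshape(rgbImage, [], 3);
uniqueColors = unique(pixelList, 'rows');
numUniqueColors = size(uniqueColors, 1);
```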
Basically I want to compute the average of each color that is present in the image, not the average of the whole image. For instance, if the image consists of red, green, and yellow colors that vary in brightness, then I want to take the average of the red color, the green, and then the yellow color separately.
mona, that doesn't make sense. If you have a million pixels in an image, you could have a million completely different colors. So what's the average of that? Even if you have a million pixels and 100,000 unique colors, what do you mean by average? Let's say one of the 100,000 colors is (123, 180, 221) and let's say 3000 pixels have that particular color. So what is the average of that color? Well, it's (123, 180, 221) - it's not affected by the fact that 3000 pixels have that color.
What I think you want to do instead is to convert to an indexed image with rgb2ind(), which will group similar colors. So you could ask for 10 colors and it will give an image whose values indicate which color class each pixel's color belongs to (the indices are 0-9 for the uint8 output of rgb2ind). Then you could get a count of the pixels in some color class, say class 3, with a line of code like this:
[indexedImage, colorMap] = rgb2ind(rgbImage, 10);
countClass3 = sum(indexedImage(:) == 3);
Display the quantized image like this:
imshow(indexedImage, colorMap);
colorbar;
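Building on that, the "average of each color" asked about above could be sketched as the mean RGB value within each quantized class (a hypothetical sketch; variable names are just illustrative, and it assumes a uint8 RGB image `rgbImage`):

```matlab
% Quantize to 10 color classes, then compute the mean RGB of each class.
[indexedImage, colorMap] = rgb2ind(rgbImage, 10);
redChan   = rgbImage(:, :, 1);
greenChan = rgbImage(:, :, 2);
blueChan  = rgbImage(:, :, 3);
numClasses = size(colorMap, 1);
meanColors = zeros(numClasses, 3);
for k = 1 : numClasses
	% rgb2ind returns zero-based indices for uint8 output, hence k - 1.
	mask = (indexedImage == k - 1);
	meanColors(k, :) = [mean(double(redChan(mask))), ...
	                    mean(double(greenChan(mask))), ...
	                    mean(double(blueChan(mask)))];
end
% Row k of meanColors is the average RGB color of class k.
```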
Please take a look at this web site. http://en.wikipedia.org/wiki/Color_quantization
Thanks Image Analyst. The idea of color quantization never struck me before. I have one question which made me wonder after I went through some research papers which proposed a color quantization technique in CIELAB color space. Once the image is converted to LAB color space, what exactly is meant by partitioning the LAB color space into cuboids? I understand cuboids means having the L, a, b channels together. So does having channels a and b together enable us to partition the ab space into rectangular blocks instead of cuboids, is that right? I know this is a bit silly but I just want to confirm.
If you chop your color space into rectangular blocks, you can reduce the size of a lookup table. For example, if you wanted to alter the colors of a live RGB video stream, you'd need a lookup table of 48 MB: 256 red by 256 green by 256 blue by 3 colors. You'd use the RGB values as an index into that lookup table to get the new RGB values. That used to be a very large lookup table, so to get around the memory issues, they'd use a 32x32x32 lookup table. Much smaller and faster. You could get some quantization artifacts but often, for many pixels, they weren't that noticeable. I'm not really sure why you'd do it in LAB color space, but if there is a reason, I'd bet it was for the same issues: smaller memory and faster implementation.
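The 32x32x32 lookup-table idea can be sketched like this (assumptions: a uint8 RGB image `rgbImage`, and a stand-in random table `lut` in place of a real color-mapping table):

```matlab
% Quantize each 8-bit channel into 32 bins (256 / 32 = 8 values per bin)
% and use the bin indices to look up new colors in a small 32x32x32x3 LUT.
lut = rand(32, 32, 32, 3);                         % stand-in mapping table
rIdx = floor(double(rgbImage(:, :, 1)) / 8) + 1;   % indices 1..32
gIdx = floor(double(rgbImage(:, :, 2)) / 8) + 1;
bIdx = floor(double(rgbImage(:, :, 3)) / 8) + 1;
linIdx = sub2ind([32, 32, 32], rIdx, gIdx, bIdx);  % index into one LUT page
newR = lut(linIdx);                 % page 1: red outputs
newG = lut(linIdx + 32 ^ 3);        % page 2: green outputs
newB = lut(linIdx + 2 * 32 ^ 3);    % page 3: blue outputs
newRgb = cat(3, newR, newG, newB);  % remapped image, in [0, 1]
```

All pixels falling in the same 8x8x8 cube of RGB space get the same output color, which is where the quantization artifacts mentioned above come from.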
I'm still going through this subject and in most of the research articles, partitioning has been implemented in LAB color space which made me wonder in general.
My main motive for asking the previous question was to understand how one can compute the centroid of each cuboid, which stems from understanding what these cuboids represent. Since they represent bins of a color histogram (for instance, when partitioning RGB color space, if we divide the R axis into 5 bins and the G and B axes into 4 bins each, we obtain a structure of 5×4×4 = 80 rectangular cuboids), how is it possible to obtain a single representative centroid from a 3D matrix (cuboid)?
There are plenty of articles mentioning the same. One of them is the following (page 4, section A: color space partitioning): http://liris.cnrs.fr/Documents/Liris-3058.pdf
For each cuboid, you have the starting and ending R, G, and B values. So just compute the mean location like you would with anything: loop over all values and sum the histogram counts times the values, then divide by the sum of the histogram. Standard weighted-mean formula that you've seen a thousand times: meanX = sum(hist .* x) / sum(hist). Try to code it up - it's not hard.
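As a toy example of that weighted-mean formula along one axis (the counts here are made up just to make it runnable):

```matlab
% Centroid of one cuboid along the R axis: a weighted mean of the R
% values spanned by the cuboid, weighted by the histogram counts.
rValues = 0 : 49;                   % R values this cuboid spans
counts = randi(20, 1, 50);          % hypothetical pixel counts per R value
meanR = sum(counts .* rValues) / sum(counts);
% Repeat along G and B to get the full (meanR, meanG, meanB) centroid.
```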
mona, what's the status? Did I help you solve this or are you still having problems? If so, what are they?
I am able to implement this and even go further. It did serve the purpose, but now I need to quantize the colors and then segment. Now I really want to confirm whether I did the right thing from the start, so here it goes.
I divided the L channel into 1.5 bins in each block, and the a and b channels into 3 bins in each block, so that the colors within each bin are perceptually similar to the bin's centroid. The centroid in this case is the mean of the pixels within each bin in each channel, and it replaces the color pixels within the bin.
What do you think? I just wonder why the paper calls the bins cuboids.


Can someone help me extract a color (especially red) and then do segmentation to get the pixel values of that color?
I attached a sample image.

1 Comment

Please start your own, separate question for this rather than answering someone else's 8-year-old question. In the meantime, try the Color Thresholder app.


Asked on 31 Dec 2014
Commented on 6 Jun 2022
