Normalization of color spaces

I have written code for color space conversion from RGB to HSV, Lab, YCbCr, XYZ, CMYK, and YUV. As the next step, I want to normalize each band of each color space. How can I do it?

Accepted Answer

Image Analyst on 17 Jun 2015
Use mat2gray() to rescale any array to the range [0, 1]. For example:
H = rand(4,3) % sample data standing in for a hue channel
normH = mat2gray(H) % rescales so the minimum maps to 0 and the maximum to 1
In the command window, notice how normH goes from 0-1.
H =
0.6787 0.6555 0.2769
0.7577 0.1712 0.0462
0.7431 0.7060 0.0971
0.3922 0.0318 0.8235
normH =
0.8172 0.7878 0.3096
0.9170 0.1760 0.0181
0.8985 0.8517 0.0825
0.4553 0 1.0000

More Answers (2)

Christiaan on 17 Jun 2015
Dear Sir,
If I understand you correctly, you want to normalise a color map. You could use this code (as an example):
clc; clear all; close all;
c_map = colormap(winter); close; % grab the colormap, then close the figure it opened
x = 0.9; % position in the map; the map is normalised from x = 0 to 1
c_map_picked = c_map(round((length(c_map)-1)*x+1),:) % the RGB triplet at that position
x = linspace(0,2*pi,100);
y = sin(x);
line(x,y,'Color',c_map_picked); hold on;
Kind regards, Christiaan

DGM on 15 Nov 2022
Edited: DGM on 15 Nov 2022
I know everyone just recommends mat2gray(), but I cannot see why that's universally appropriate; I certainly can't see it as appropriate for "normalizing a color space" -- whatever that means.
The question that should follow from "I want to normalize x" is "normalize it with respect to what?". If you just have a grayscale image and want to stretch the contrast to fill available dynamic range, then normalizing to the extrema of the data makes sense. If you have something else -- most anything else -- normalizing to channel extrema seems questionable.
To start, HSV as returned by rgb2hsv() is already normalized. But wait! Not all tools return H in the range [0 1]; some return H in degrees. How do you normalize it then? Do you use mat2gray() or normalize()?
RGB = imread('underwaterimg1.jpg');
[H S V] = imsplit(rgb2hsv(RGB));
% the extrema of the image channels
[imrange(H); imrange(S); imrange(V)]
ans = 3×2
    0.4032    0.5634
    0.2055    1.0000
    0.3686    1.0000
H = mat2gray(H); % normalize H to extrema
RGBn = im2uint8(hsv2rgb(cat(3,H,S,V)));
% now it's unrepairable garbage
imshow([RGB RGBn])
No. You divide by 360, because blindly scaling H will tend to ruin the image. If you don't know what the extrema used to be before you normalized, the damage cannot be undone.
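For instance, a minimal sketch, assuming a hypothetical H channel that some other tool handed you in degrees:
Hdeg = 360*rand(100,100); % stand-in data; pretend another tool returned this in degrees
Hn = Hdeg/360; % normalize by the known nominal range, not the data extrema
Hrestored = Hn*360; % fully reversible, because 360 is a fixed constant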
Okay, but what about everything other than hue? Blindly normalizing chroma channels to their extrema in any opponent model (YCbCr, YUV, LAB) will also shift colors. Hues will change. Neutral grays may become saturated. Saturated colors may become gray. Highlights and shadows will often be muted, reducing contrast.
[Figure: normalized YCbCr]
Why would that be intended? Does it even serve any purpose to stretch/offset chroma/saturation and cause gross hue shifts? That brings us to the real question. Why are we normalizing anything?
I think the answer falls into a handful of categories.
  1. my image is low contrast and I want more contrast
  2. I didn't pay attention when casting and now I have an improperly-scaled image that won't work with anything
  3. I converted to LAB and these numbers are big and negative and I'm confused.
  4. I converted to YCbCr but I don't want integers
  5. I actually want to transform the image data to unit-scale in a manner that is reversible and consistent between images
If you want to adjust contrast in something other than RGB, operate on the lightness component of the model. Depending on the color space, you will have complications, but at least it makes more sense than blindly scaling the entire array.
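As a sketch, here's what that looks like in HSV, stretching only the V channel and leaving hue and saturation alone (reusing the RGB image from above):
HSV = rgb2hsv(RGB);
HSV(:,:,3) = mat2gray(HSV(:,:,3)); % stretch only the value channel to [0 1]
RGBc = im2uint8(hsv2rgb(HSV)); % hue and saturation are untouched
imshow([RGB RGBc])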
For everything else, it's important to preserve the relative scale of an image with respect to some nominal limiting values. If you don't, you're losing information about the scale of the image. This makes the process irreversible and inconsistent between images. Normalizing with respect to extrema would be bad.
So what do you normalize the channels to? Disregarding the question of whether you should, the answer varies. Depending on intent, some cases are simple. Others are very difficult.
If your images are improperly-scaled, rescale them according to the scale that they currently have, not their extrema.
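For example, say you casually cast a uint8 image with double() and now everything is in [0 255] but class double. The fix is to divide by the nominal range of the original class, not by the observed extrema. A sketch:
badpict = double(imread('peppers.png')); % values in [0 255], but class double
goodpict = badpict/255; % rescale by the known nominal range of uint8
% or avoid the problem entirely:
goodpict = im2double(imread('peppers.png'));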
Normalizing YUV is fairly simple, though it's just as ridiculous as any of this. The limits are Y = [0 1], U = [-0.436 0.436], V = [-0.615 0.615]. Note that U and V are zero-centered. If you're wondering why your YUV image has values hundreds of times larger than that, that's because it's not YUV.
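A sketch of what that would look like. MATLAB has no built-in rgb2yuv, so the standard BT.601 analog YUV matrix is written out here; the variable names are just for illustration:
pict = im2double(imread('peppers.png'));
T = [ 0.299    0.587    0.114;    % BT.601 luma
     -0.14713 -0.28886  0.436;    % U
      0.615   -0.51499 -0.10001]; % V
YUV = reshape(reshape(pict,[],3)*T.', size(pict));
Yn = YUV(:,:,1); % Y is already in [0 1]
Un = (YUV(:,:,2) + 0.436)/0.872; % map [-0.436 0.436] to [0 1]; gray lands at 0.5
Vn = (YUV(:,:,3) + 0.615)/1.230; % map [-0.615 0.615] to [0 1]; gray lands at 0.5
YUVn = cat(3,Yn,Un,Vn); % reversible, since the constants are fixed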
Normalizing YCbCr should be simple too. It's defined for integer-class data, so you could just normalize by the nominal range of the class in use (e.g. [0 255] for uint8, with Cb and Cr centered on 128) ... or you could just read the documentation for rgb2ycbcr() and see that it will already give you unit-scale output.
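In other words (per the MATLAB documentation, floating-point input to rgb2ycbcr() yields floating-point, unit-scale output):
YCC = rgb2ycbcr(im2double(RGB)); % already unit-scale
% Y spans [16 235]/255 and Cb/Cr span [16 240]/255, centered on 128/255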
What about LAB? Well, LAB isn't constrained to the extent of sRGB, so how you "normalize" depends on what you want. The projection of sRGB into LAB occupies a volume that fits within, but does not fill, L = [0 100], A = [-86 98], B = [-107 95] (roughly). Those are only the extreme axis ranges for the entire sRGB gamut; the extent of the projection varies between color points. If you were crazy, you could normalize the image to the gamut extents at each color. That's not simple. It can be done, but those are very uncommon goals.
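If all you actually want is a reversible unit-scale representation, a sketch scaling by those fixed (rough) bounds rather than by data extrema:
LAB = rgb2lab(RGB); % same image as before
Ln = LAB(:,:,1)/100; % L nominally spans [0 100]
An = (LAB(:,:,2) + 86)/(86 + 98); % A roughly spans [-86 98]
Bn = (LAB(:,:,3) + 107)/(107 + 95); % B roughly spans [-107 95]
LABn = cat(3,Ln,An,Bn);
% reversible, because the constants are fixed instead of image-dependent;
% note the chroma centers no longer land exactly at 0.5, so this is for
% scaling/storage purposes, not for symmetric chroma adjustment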
In reality, this issue of constraint exists for YUV, YCbCr, and everything else. If you are assuming that your image data can be freely moved around inside a cube in any given colorspace without causing problems, you'd be wrong. That's true for basically everything other than RGB, HSV, and HSL.
Bear in mind that this objection comes from a guy who has implemented chroma-normalized colorspace conversion tools and wrote an entire toolbox around intentionally turning images into garish colored garbage. When I ask "why would you do that", I understand that there is some negligible degree to which the question might not be rhetorical.
I mentioned that maybe someone just wants to maximize contrast to the available range. It's a simplistic motive. It's as simplistic as wanting to grab the slider in Photoshop and slam the saturation. Maybe you don't want to normalize saturation so much as you're trying to maximize it. Note that when I gave axis limits, I gave the centers as well. Any rescaling of chroma information must retain the symmetry of the distribution around the neutral axis; otherwise you're distorting things. Even then, this sort of adjustment isn't typically done in rectangular coordinates on AB, UV, or CbCr. It's done in cylindrical coordinates on C, but nobody mentioned those conversions, so I have to doubt that's what anyone is asking for.
Consider the underwater image from before. Let's say we wanted to maximize the chroma information in LAB. This is where the color points lie. Bear in mind that this is a 2D projection.
If we did the naive thing and just stretched everything to fit within the limits given, this is where the colors would go.
You end up with colors way over toward yellow and magenta. In other words, we're distorting hue information. But I mentioned that it's important to preserve the central neutral axis, so colors really shouldn't be crossing the A=0 and B=0 axes.
It's still causing problems. Almost everything is out of gamut now. Where will those colors be mapped when converted back to RGB?
What if we stretch everything radially like I suggested? We can find the maximum chroma and scale to that.
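A rough sketch of that radial scaling, assuming a target maximum chroma Cmax (the 134 below is just an approximate chroma extreme of the sRGB gamut in LAB, near the blue corner):
LAB = rgb2lab(im2double(RGB)); % the underwater image from before
C = hypot(LAB(:,:,2),LAB(:,:,3)); % chroma = radius from the neutral axis
Hrad = atan2(LAB(:,:,3),LAB(:,:,2)); % hue angle, which we want to preserve
Cmax = 134; % assumed target; roughly the largest chroma in the sRGB gamut
Cs = C*(Cmax/max(C(:))); % one global radial factor; symmetry about neutral is preserved
LAB(:,:,2) = Cs.*cos(Hrad);
LAB(:,:,3) = Cs.*sin(Hrad);
RGBs = im2uint8(lab2rgb(LAB)); % much of this is now out of gamut and will clip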
Hmm. It's almost like this is a bad idea no matter how you do it. If you look closely, you might notice that those color points were all at the gamut boundary to begin with. There's really nowhere to stretch them other than toward the neutral axis. Could you stretch those points between the neutral axis and the local extents of the gamut?
Yes, but it's not simple. MIMT imlnc() can do it, but how it's done is beyond the scope of this conversation, and the utility is still questionable. Questionable as even this least-bad example remains, it's so far removed from the topic of naive channel "normalization" that I hope you can see why I would have to ask why.
  1 Comment
Image Analyst on 15 Nov 2022
I agree. A lot of times people ask for something that is not really needed, like calling histeq or imadjust during image segmentation when it doesn't help at all. Sometimes they just want to see the image better and think that if they can see it more easily, it will be easier to segment, which we all (or most of us) know doesn't help.

