Extracting features from a hyperspectral image

Hi, could anyone please recommend a step-by-step tutorial for an absolute beginner to go from image to data? I have hundreds of NIR images and want to extract data from them to start the modelling etc., but having no experience with imaging data I am clueless. I assume there would be guided tutorials with instructions on how to go from .raw and .hdr files to the stage where you have numbers in an output file to start testing. Any recommendations are greatly appreciated.

 Accepted Answer

You're asking too much. We can't give you a lesson-by-lesson course on hyperspectral imaging. Try searching online for one.
However, to get started with the basics of image processing in MATLAB, see my Image Segmentation Tutorial on the File Exchange.
It's a generic, general-purpose demo of how to threshold an image to find blobs, then measure things about the blobs and extract certain blobs based on their areas or diameters.
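That threshold-and-measure workflow can be sketched in a few lines; the file name, threshold, and size limits below are placeholder values you would tune for your own images:

```matlab
% Minimal sketch of the threshold -> find blobs -> measure workflow.
grayImage = imread('grains.png');   % placeholder file name
mask = grayImage > 50;              % threshold just above the dark background
mask = bwareaopen(mask, 100);       % discard blobs smaller than 100 pixels
% Measure area, equivalent diameter, and mean gray level of each blob:
props = regionprops(mask, grayImage, 'Area', 'EquivDiameter', 'MeanIntensity');
% Keep only blobs within a plausible diameter range (assumed limits):
keep = [props.EquivDiameter] > 10 & [props.EquivDiameter] < 80;
props = props(keep);
```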

14 Comments

Thank you for your help; I will practice following your tutorials and see where it takes me. Sorry if it sounded like too much, but I am new to this field, and online searches returned hundreds of different tutorials, so it was confusing to decide on a good starting point. Also, my purpose is only to extract data out of the images I have so I can continue with the rest of the experiments (mostly plant genetics). Thank you again for your suggestions. If you have any good reading suggestions, even involving other platforms (like Python, R, etc.), please do recommend them. Cheers!
I'd recommend staying with MATLAB. I took a week-long Python class and didn't see any reason for me to move to that language. MATLAB could do everything I need to do, and it was easier to do it in. Python is very fussy about indenting and about having the right files "imported"; you don't have to worry about that in MATLAB.
I don't know what you want to extract -- you didn't upload an image and tell me, so I can't make any suggestions.
Sorry, I didn't want to sound too demanding, but here is a screenshot ("HSI image") of an image from the hyperspectral viewer [uploading the actual file is difficult as it is >250 MB in *.dat format]. Each image has 12 grains in it (this one only shows 10; it's an old file), and I want to extract features/absorbance of each grain separately; there are over 500 images in total. The grains were placed on a black background. The second screenshot ("HSI file types") shows the type of files I have in each folder. Any suggestions would be greatly appreciated.
Is the background a spectrally flat white reference? Is there a wavelength that the grains all show up in? If so, use that for segmenting (finding the grain pixels as a binary image mask). Then you can get the average reflectance for each grain in that wavelength. Then you can use the mask to get the reflectance at each of the other wavelengths.
Thanks again. The background was black as the grains were put on a black tray; however, I have white references as well. The grains all show up around bands 110-125 (out of 256 bands), though the exact bands differ for different grains.
So, like I said: create a mask by scanning through all 256 band images, thresholding just above the black background, and using bwareaopen to keep only blobs at least as big as, say, half a grain. Then OR all the masks together to create a master mask with all grains in it. Then you can scan through again, getting the mean intensity of all grains with regionprops.
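The steps above can be sketched as follows; `hypercube` is assumed to be a rows-by-cols-by-256 array already loaded from your .raw/.hdr files (e.g. with multibandread), and the threshold and minimum blob size are assumed values you would tune:

```matlab
threshold = 20;       % just above the black background (assumed value)
minBlobSize = 500;    % roughly half a grain, in pixels (assumed value)

% Build a master mask by ORing the per-band masks together:
masterMask = false(size(hypercube,1), size(hypercube,2));
for band = 1:size(hypercube,3)
    bandMask = hypercube(:,:,band) > threshold;
    bandMask = bwareaopen(bandMask, minBlobSize);  % drop small blobs
    masterMask = masterMask | bandMask;
end

% Label the grains once, then measure each grain's mean intensity per band:
labeledMask = bwlabel(masterMask);
numGrains = max(labeledMask(:));
meanIntensity = zeros(numGrains, size(hypercube,3));
for band = 1:size(hypercube,3)
    props = regionprops(labeledMask, hypercube(:,:,band), 'MeanIntensity');
    meanIntensity(:, band) = [props.MeanIntensity];
end
% Row k of meanIntensity is now the spectrum of grain k.
```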
Hello again! Before creating a mask do I need to calibrate the image? I only have white reference - the company that did imaging for me said this in their report: "The camera shutter was automatically closed for 1 s at the end of each scan and approximately 100 frames were also recorded for a white PTFE reference material with approximately 100 % reflectance across the entire measured spectral range (white reference)." Does this mean I only need white reference for calibration? If yes, could you please advise on that?
To compensate for possible exposure differences and spectral responsivity differences in the different bands, I would measure the PTFE gray levels in each band image. Then I would find the brightest value and normalize all band images to that brightest value, so that we can say the spectra are true. Then do as I said.
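One way to sketch that normalization, assuming `hypercube` is the loaded data cube and `whiteROI` is a hypothetical logical mask you have drawn over the PTFE reference patch:

```matlab
% Measure the PTFE gray level in every band:
numBands = size(hypercube, 3);
whiteLevel = zeros(1, numBands);
for band = 1:numBands
    thisBand = hypercube(:,:,band);
    whiteLevel(band) = mean(thisBand(whiteROI));
end

% Scale every band so its white-reference level matches the brightest one:
gain = max(whiteLevel) ./ whiteLevel;
normalizedCube = double(hypercube) .* reshape(gain, 1, 1, numBands);
```

Dividing each band by its own white level (giving reflectance on a 0-1 scale) is a common alternative; either way the PTFE measurement is what ties the bands to a common scale.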
Thank you, I will look into that. Meanwhile, could you please comment on this: if I can generate a mask like the one attached, does that suggest it has picked up the targets correctly and that the spectra can be trusted?
Yes, except for the one in the lower-left corner. If you know none of the blobs should touch the edge of the image, you can use imclearborder to get rid of them.
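Cleaning up border-touching blobs is a one-liner; here `mask` is assumed to be the logical master mask from the earlier steps:

```matlab
cleanMask = imclearborder(mask);     % drop blobs touching the image border
[~, numGrains] = bwlabel(cleanMask); % count the grains that remain
```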
Thank you, that is a marker I placed to number the grains. Another question I wanted to ask: what should be the basis for choosing a segmentation method? I see there are so many in the literature, and new ones keep coming up in papers. Since, as you can see, the image does not have too many elements and the grains are easy to detect, if one method works well for me do I even need to bother comparing the various methods (except from a knowledge perspective)? That is, if I can segment my grains by whichever method, does the choice make any difference?
Well, that's the art of image analysis. You'll learn from experience. Basically you craft an algorithm to process your image until you get to a point where you can threshold it into the things/regions you're interested in, and those you're not (background). If you're lucky you can threshold the image right away. If not, you need to use your experience to think up an algorithm that gets you to a point where you can threshold. Hint: the first step is almost never edge detection or a global contrast adjustment (or histogram equalization), even though that's what novices usually try first.
No, it does not matter what method you use, as long as it works and gives you what you want. Sure, it may not be efficient, may contain unneeded operations, or may be a different algorithm than I (with over 40 years of image processing experience) would have developed, but as long as it gives you something you can use in the end, that's all that matters.
Thank you for explaining that so clearly, and in fact for all of the answers; these have been very helpful. This is the kind of answer I was hoping for: with plant breeding and genetics being the focus of my PhD, I need to be careful how much time I assign to learning image analysis, as my main aim is to get the desired features out and move on to the more relevant questions of my project. But this is surely very exciting data to play with; I can spend hours on the computer without realising how quickly time is passing.
