Automatically selecting n areas per picture and measuring colour (or brightness) therein




I would like to measure color (or maybe just average brightness if this is more reliable, since I’m primarily interested in dark pigmentation) in eight samples per image, in a series of images. All samples are on a white background that should be relatively easy to extract with a threshold filter. Here is a sample image:

How should I proceed to automate the following:

  1. Isolate the eight distinct areas in each image (do I need a prior threshold filter or would that work with the current white background?)
  2. Get the mean/min/max/standard deviation of color within each area (RGB values, or a value along a relative lightness-darkness scale on images converted to luminance)?
  3. Export these measurements into a .csv or similar output file, while making sure that areas are always exported in the same order (from top left to bottom right, since it’s always two rows of four samples, but I wouldn’t mind numbering them by hand if it’s necessary)

I suppose I would need segmentation for step 1, but I have never tried that plugin and have no experience with ImageJ/Fiji whatsoever, and I am not sure how to proceed for steps 2 and 3. I have a couple of ideas for step 2, but I would rather ask before presenting them because I am not sure they would be optimal.

Thank you in advance.



You are on the right track - you need to first segment your worms. I gave it a quick try with Trainable Weka Segmentation just using default settings and got some good results:

I generated this probability map:

Which I then thresholded to create a binary mask:
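In plain-Python terms (a sketch of the idea only, not how TWS does it internally), turning a probability map into a binary mask is just a per-pixel threshold:

```python
def to_mask(img, thresh=0.5):
    """Binarise a probability map (values in [0, 1]): pixels at or above
    `thresh` become foreground (1), everything else background (0)."""
    return [[1 if px >= thresh else 0 for px in row] for row in img]
```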

From there - you can use Analyze Particles and then overlay those ROIs on your original image to measure the regions within (Multi-Measure and Set Measurements):
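Conceptually, Analyze Particles plus Multi-Measure amount to labelling the connected regions of the mask and then computing statistics over the original pixel values in each region. A rough pure-Python analogue, for illustration only (the plugin does the real work):

```python
from collections import deque
from statistics import mean, pstdev

def label_mask(mask):
    """4-connected component labelling of a binary mask (the rough
    equivalent of Analyze Particles). Returns a label image
    (0 = background) and the number of regions found."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                n += 1
                queue = deque([(y, x)])
                labels[y][x] = n
                while queue:  # flood-fill the whole region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = n
                            queue.append((ny, nx))
    return labels, n

def region_stats(img, labels, n):
    """Per-region mean/min/max/standard deviation of the original pixel
    values, as Multi-Measure would report for each ROI."""
    values = {k: [] for k in range(1, n + 1)}
    for img_row, lab_row in zip(img, labels):
        for px, lab in zip(img_row, lab_row):
            if lab:
                values[lab].append(px)
    return {k: {"mean": mean(v), "min": min(v), "max": max(v),
                "std": pstdev(v)} for k, v in values.items()}
```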

There are some straightforward workshops you can use to get you started - just click here.

A word of caution though… your sampling is perhaps not the best for such measurements. Zooming in to your image:

Your resolution is quite poor… you might consider re-acquiring these images at a higher resolution. A good place to start learning more on this topic is the Principles page of the ImageJ wiki.

Hope this helps at least get you started… If you have more questions - post them here. There might be others who have more insightful advice to give you…

eta :slight_smile:


Thanks a lot for the detailed and illustrated answer @etarena.

I tried Trainable Weka Segmentation briefly when I first opened this thread, and tried again now that you recommend the plugin, but it could not finish training: it ran out of memory on my sample images (“java.lang.OutOfMemoryError”). They have a higher resolution than the example I posted here (so your concern was relevant, but that image was only for posting), and I thought the 4 GB of RAM dedicated to Fiji in the options would be enough, but it turns out I have to downscale the images to get the plugin to finish training.

Is there any other workaround I could try without downscaling the sample images? I suppose I could crop around each caterpillar to get only one sample per picture, which would lower the resolution per picture at the cost of eight times more pictures in total, but that would be time-consuming without automation, and automation cannot be achieved before the segmentation is trained.

After running that segmentation on a low-resolution test picture, I followed the steps you described but could not get to the RGB picture with the 1-8 labels. I will read the Analyze Particles help more thoroughly, and I am currently going through the workshop you linked; I am sure I just skipped something or did something wrong.

Once I manage to get the whole thing running from start to end, is there a way I can automate it on an image sequence (except for the segmentation training and thresholding, which will probably yield better results with some manual input)?

Thank you again for your answer and helpful links.



For the memory issue… you can increase the amount of memory available for ImageJ. You can do this from Edit > Options > Memory & Threads.

Within TWS - another option is to use fewer features. Try selecting only Gaussian and optionally Difference of Gaussians. Another option is to split your large images into smaller images, process them, and merge them back together.
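The split/merge bookkeeping can be sketched in plain Python (nested lists standing in for images; this illustrates the tiling logic only, not ImageJ code, and assumes the image dimensions are multiples of the tile size):

```python
def split_tiles(img, th, tw):
    """Split an image (nested lists) into a grid of th x tw tiles.
    Assumes height % th == 0 and width % tw == 0."""
    h, w = len(img), len(img[0])
    return [[[row[x:x + tw] for row in img[y:y + th]]
             for x in range(0, w, tw)]
            for y in range(0, h, th)]

def merge_tiles(grid):
    """Reassemble the tile grid produced by split_tiles into one image."""
    out = []
    for tile_row in grid:
        th = len(tile_row[0])
        for r in range(th):
            out.append([px for tile in tile_row for px in tile[r]])
    return out
```

Each tile can then be processed independently (keeping memory low) before merging the results back.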

For this - you just need to open the original image and toggle the ‘Show all’ option in the ROI Manager window. Then you have them overlaid… (check out the Segmentation workshop linked below)

And - yes - you can automate this whole thing in a macro script. TWS is macro-recordable. Here are some helpful links for scripting, as well as general ImageJ/segmentation links:
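Since TWS is macro-recordable, the recorder gives you the exact calls; but as a structural sketch of the batch loop (plain Python rather than macro language, with a hypothetical `measure()` standing in for the recorded segment-and-measure steps), it might look like:

```python
import csv
import os

def process_folder(in_dir, out_csv, measure):
    """Batch-apply a per-image `measure` function and collect one CSV.

    `measure` is a hypothetical stand-in for the recorded steps
    (segmentation, Analyze Particles, Multi-Measure): it takes an image
    path and returns one value per sample, in a fixed order.
    """
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "sample", "mean_gray"])
        for name in sorted(os.listdir(in_dir)):
            values = measure(os.path.join(in_dir, name))
            for i, value in enumerate(values, 1):
                writer.writerow([name, i, value])
```

This also covers the CSV-export step: one row per sample, always in the same order within each image.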

Hope this helps get you started! And post again if you have more questions…

eta :slight_smile:


Thanks a lot @etarena.

So I’ve created a first script that removes any potential light gradient by subtracting the background, so that all individuals are evenly exposed (they should be already, considering the lighting conditions, but there may still be small biases). It then clips the highlights slightly so that the remaining noise in the background is removed and contains only white pixels, after which the auto-crop function works well. This gives me black-and-white pictures at a lower resolution (since the unnecessary empty background is cropped out) on which the Weka segmentation can work more easily.
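In plain terms, the highlight-clipping step boils down to something like this (a Python sketch of the idea, not my actual macro; the cutoff value is illustrative):

```python
def clip_highlights(img, cutoff=245):
    """Push every pixel at or above `cutoff` to pure white (255), so
    faint background noise becomes uniform and auto-crop sees a clean
    border around the samples."""
    return [[255 if px >= cutoff else px for px in row] for row in img]
```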

I am now trying to create another script for the segmentation and measurements themselves. I have been able to reproduce what you showed, but I am facing a couple of issues in automating it:

  • First, I trained the segmentation on a sample picture and saved the classifier as well as the data. However, if I try to reuse these files on another picture (or even the same one) to segment without training again, after a couple of minutes of heavy CPU load I get a ton of Java errors and exceptions when requesting the probability overlay. Yet I need to be able to load the segmentation data; otherwise I would have to train the segmentation for every new picture, which would make the workflow a lot heavier and, at the same time, make the measurements less repeatable (the biggest issue).

  • This brings up another question: what are my options if I manage to solve this first issue and load the segmentation data, but the segmentation does not work perfectly on all sample pictures? My goal is to measure the average color (or darkness, if easier) of each caterpillar, which means I have caterpillars of variable color, photographed in random combinations, and a segmentation trained on one picture could ultimately be inappropriate on another if the caterpillars are of very contrasting coloration. Could I work around this by visually selecting a set of the most extreme colorations, putting them together in one extra picture, and training the segmentation once and for all on that particular picture with the highest possible variation?

  • When overlaying the ROIs on the base picture, I noticed that the ROIs are not always numbered in the same order. I thought the order would be based on the coordinates of each region in the picture, 1 for the lowest x and y, n for the highest, but the ordering is based on something else (I am guessing the first ROI to be identified, because it contrasts best with the background, is numbered 1, and so on). This means that measurement results are ordered differently among pictures, which is an issue for me because each caterpillar already has an identity in the rest of my dataset. Can I force the ordering to be based on coordinates? Otherwise, I suppose I have no better solution than (1) manually reordering the results when I export them, or (2) splitting each image into eight pictures, each with a single individual.



Ok… let’s see…

I am not sure what is causing these errors… perhaps @iarganda can help you in this particular issue?

You should be training on multiple images - so yes, on the ‘extremes’ of coloration… so you will need to load the classifier, re-train and save again - across many of your images. Did you acquire all the images in the same way at least (this is obviously essential for the analysis you want to do regardless)? If the background remains the same and lighting and whatnot - it will work - though it might not be 100% correct in all cases… that error you may have to live with for reproducibility’s sake.

I do not know the answer to this at the moment… will need to do some deeper investigating… you should be able to rename/sort ROIs - so take a look at some of the Built-in Macro Functions, as well as looping statements.
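As a sketch of what coordinate-based ordering means for a two-rows-of-four layout (plain Python with a hypothetical helper; within ImageJ you would do the equivalent with macro functions on the ROI Manager): sort the ROI centroids by y to separate the rows, then by x within each row.

```python
def order_row_major(centroids, n_rows=2):
    """Order ROI centroids (x, y) from top-left to bottom-right for a
    layout of `n_rows` rows. Sort by y, cut into rows, then sort each
    row by x. Returns the indices of the original ROIs in that order."""
    idx = sorted(range(len(centroids)), key=lambda i: centroids[i][1])
    per_row = len(centroids) // n_rows
    ordered = []
    for r in range(n_rows):
        row = idx[r * per_row:(r + 1) * per_row]
        ordered.extend(sorted(row, key=lambda i: centroids[i][0]))
    return ordered
```

This assumes the rows are cleanly separated vertically, which should hold for a fixed two-by-four arrangement.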

For this last question - post another, separate question on the Forum specifically for this… there have been previous discussions here on the Forum regarding sorting of ROIs (this one might help you in particular) - so someone may have a straightforward solution to this - plus you can search the Forum for other previous discussions…

Hope this helps a little at least…



This is very strange. Can you send me the classifier and image files so I can try to reproduce that error?

Also, can you post the exceptions you get there?


I’m not exactly sure how to proceed with that in ImageJ. How can I train the plugin on several distinct images without resetting it each time? By saving the data after one picture, then loading it before continuing the training on the next picture?

All pictures were taken under exactly the same conditions and lighting settings, yes, so the background should be pretty similar among pictures; but it would still be best to use more than one picture for the training, because the samples themselves have different colorations and were assigned to pictures randomly.

Thanks for the information about ROIs. I’ll investigate that as soon as possible. Hopefully the Sort button in the ROI Manager will be a good solution; I hadn’t noticed it.

Sure, you’ll find the classifier file and a sample image here: I’ve included both a color and a grayscale image after removal of any background gradient, but I was using only the latter. The errors I got were very spammy, so I pasted them there instead of posting them directly here.

Thank you for your help.



Essentially - yes. You just need to train your classifier… save it… apply it to another image… re-train… save again… and so on until you are happy with your segmentation. If you do this for a few varying samples, it should be robust enough to then just go ahead and apply it to all of your images.

eta :slight_smile:


OK, the error is simple then. You trained your classifier on a color (RGB) image but applied it to a grayscale one. That’s not possible. I will try to add an error check in the plugin GUI to show an error message when that occurs.


Oh, from your last two messages I now understand that the classifier file is more than I thought. I assumed it just corresponded to the segmentation settings and category names, and that the data file was the actual learning material. That is why I reused the classifier from my first attempts on color pictures; I thought it would just contain the settings and category labels… Sorry I hadn’t investigated that before.

Am I right, then, that the data is just the traces, and that loading it will let me restart the training, but not resume what had already been trained before?


The data contains the feature vectors associated with the traces, so it can be used to train a new classifier from scratch and in combination with new traces done on a new image.