Segmentation settings for TWS

weka

#1

Hi!

I am new to ImageJ and I would like to use the (amazing) Trainable Weka Segmentation plugin to quantify the total area of red and green pixels compared to the background in each picture (similar to this example: Quantifying Weka Output).

My red and green pixels are sediment particles covered with fluorescent pigment, photographed under UV light.

I managed to train the classifier on a stack of cropped images by choosing “Gaussian blur” and “Difference of gaussians”, but it’s quite difficult to understand what all the segmentation settings do, even after reading the wiki page http://imagej.net/Trainable_Weka_Segmentation.

Can someone recommend improved segmentation settings for my pictures? Here https://www.dropbox.com/sh/8ffai020jppk1s9/AADhcVBgsxUOyn6qdsQj77GEa?dl=0 are example images and the segmentation results for the first three pictures. I also added the classifier.

Thanks for any advice!


#2

Hello @JJerney and welcome to the ImageJ forum!

I think the “Gaussian blur” and “Difference of gaussians” features (together with the implicit HSB features of the color images) should be enough to get a proper segmentation. In my opinion, the key is to trace enough samples in the transition regions between red and green, green and blue, etc. This is an example of a segmentation I tried using the same features as you did and a cropped version of one of your images:

Result:

What do you think?
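In case you later want to reproduce this outside the GUI, here is a minimal sketch of the same workflow with the TWS scripting API. The method names (addClass(), setClassLabel(), addExample(), trainClassifier(), applyClassifier()) are taken from the plugin’s scripting examples, but please check them against the version you have installed; the paths, ROIs and class order below are placeholders for your own traces. If you also want to script the “Gaussian blur” / “Difference of gaussians” choice, look at setEnabledFeatures(...) in the scripting docs for your version.

```java
import ij.IJ;
import ij.ImagePlus;
import ij.gui.Roi;
import trainableSegmentation.WekaSegmentation;

public class TrainSketch {
    public static void main(String[] args) {
        // Open one of the (cropped) training images -- placeholder path.
        ImagePlus image = IJ.openImage("/path/to/cropped_training_image.tif");

        // Create the segmentator on the training image.
        WekaSegmentation seg = new WekaSegmentation(image);

        // TWS starts with two classes; add a third one for background / red / green
        // (class indices follow the order in which the classes were created).
        seg.addClass();
        seg.setClassLabel(0, "background");
        seg.setClassLabel(1, "red");
        seg.setClassLabel(2, "green");

        // Example traces per class on slice 1 -- placeholder ROIs standing in for
        // the traces you would normally draw interactively.
        seg.addExample(0, new Roi(10, 10, 20, 20), 1);   // background trace
        seg.addExample(1, new Roi(100, 50, 20, 20), 1);  // red trace
        seg.addExample(2, new Roi(200, 80, 20, 20), 1);  // green trace

        // Train a classifier from the current traces and apply it to the same image.
        if (seg.trainClassifier()) {
            ImagePlus result = seg.applyClassifier(image, 0, false); // 0 = auto threads, no probability maps
            result.show();
        }
    }
}
```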


#3

Hi iarganda!

Thanks for your support and sorry for my delayed reply.

I think your result is quite good, but I would like to get a better result, for example for the red areas in the top right corner of your classified image. Can I improve my classifier by further training? I am afraid of overtraining the classifier. How do I know if I have trained enough or already too much?

Thanks!


#4

I guess you could observe the Out of bag error reported in the Log window when training: if it decreases after adding a new trace and retraining, the classifier is still improving. If the error increases, this might indicate some degree of “over-training”…


#5

As @imagejan said, the out of bag error is a good indicator of your classifier’s performance on samples that are similar to your training ones. Of course, you won’t have an idea of its performance on samples that the classifier has never seen and that are very different from your training samples.

In general, it is a good idea to feed the classifier with representative samples of all the types of pixels you might encounter in your images.

As a rule of thumb, during the interactive process of adding new samples and retraining:

  • If the out of bag error decreases, you are going in the right direction and the new samples you added are helping. Therefore, continue adding even more samples.
  • If the out of bag error increases, do not worry, it might be because
    1. the new samples were added erroneously to a class (unlikely if you are being careful),
    2. the new samples were unseen and have increased the complexity of the classification, so continue adding and training to try to reduce the error.
  • If the out of bag error stays the same, the samples you added didn’t make a difference and most probably you can stop training.
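If you want to sanity-check that number outside the GUI, you can export your traces with “Save data” and train a random forest on the resulting ARFF file directly with the Weka API, then read its out-of-bag error. A minimal sketch, assuming a placeholder file name; TWS uses its own FastRandomForest internally, so the exact value may differ slightly, and depending on your Weka version you may need to enable the out-of-bag calculation explicitly:

```java
import java.io.BufferedReader;
import java.io.FileReader;

import weka.classifiers.trees.RandomForest;
import weka.core.Instances;

public class OobCheck {
    public static void main(String[] args) throws Exception {
        // Load the training data exported from TWS via "Save data" (placeholder path).
        Instances data = new Instances(new BufferedReader(new FileReader("traces.arff")));
        // The class attribute is the last one in the exported ARFF.
        data.setClassIndex(data.numAttributes() - 1);

        // Train a random forest and report its out-of-bag error.
        // Depending on the Weka version, you may need to switch on the out-of-bag
        // calculation before building (in newer Weka, RandomForest extends Bagging).
        RandomForest rf = new RandomForest();
        rf.buildClassifier(data);
        System.out.println("Out-of-bag error: " + rf.measureOutOfBagError());
    }
}
```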

#6

Thanks! Checking the Out of bag error helped me get a good result for pictures dominated by green pixels with only a few red ones.
But as soon as I apply my well-trained classifier to a new picture (dominated by red, with only a few or no green pixels), I get wrong classification results. If I continue training on the new red-dominated picture, I can’t use the classifier for green-dominated pictures any more. Why is this, and how can I solve the problem?


#7

That’s probably because the pixels in the new image are completely new to the trained classifier, so it does not perform well on them.

It stops working on the green-dominated pictures because, as soon as you continue training, the old classifier gets erased and the new one learns only from your new samples. Take into account that every time you click on “Train classifier” you create it from scratch.

You have two options:

  1. When you are happy with the results in the first image, click on “Save data” to store the trace feature information in an ARFF file. Then open your new image and load the ARFF file by clicking on “Load data”. That way the classifier will take into account the old traces plus the new ones you introduce (see the sketch after this list).
  2. Open the plugin with both images in a stack, so you can train at the same time using pixels of both images.
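Here is a rough sketch of option 1 with the TWS scripting API, in case you prefer to script it. The method names (saveData(), loadTrainingData(), addExample(), trainClassifier(), applyClassifier()) come from the plugin’s scripting examples and should be checked against your installed version; all paths and ROIs are placeholders:

```java
import ij.IJ;
import ij.ImagePlus;
import ij.gui.Roi;
import trainableSegmentation.WekaSegmentation;

public class ReuseTraces {
    public static void main(String[] args) {
        // 1) On the first (green-dominated) image: train as usual, then store the traces.
        ImagePlus first = IJ.openImage("/path/to/green_dominated.tif");
        WekaSegmentation segFirst = new WekaSegmentation(first);
        // Traces you would normally draw interactively (placeholder ROIs; real traces
        // should cover all of your classes).
        segFirst.addExample(0, new Roi(10, 10, 20, 20), 1);
        segFirst.addExample(1, new Roi(120, 40, 20, 20), 1);
        segFirst.saveData("/path/to/traces.arff");   // same as clicking "Save data"

        // 2) On the new (red-dominated) image: load the old traces, add new ones, retrain.
        ImagePlus second = IJ.openImage("/path/to/red_dominated.tif");
        WekaSegmentation segSecond = new WekaSegmentation(second);
        segSecond.loadTrainingData("/path/to/traces.arff");   // same as "Load data"
        segSecond.addExample(1, new Roi(50, 50, 20, 20), 1);  // placeholder trace on a red region

        // The classifier is rebuilt from the old plus the new traces,
        // so it keeps working on both image types.
        if (segSecond.trainClassifier()) {
            ImagePlus result = segSecond.applyClassifier(second, 0, false);
            result.show();
        }
    }
}
```

Option 2 is the same idea, except that both images are slices of one stack passed to the plugin, so your traces naturally cover both.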

#8

Excellent! Thanks a lot! Now I have a really good classifier for all my images.
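For anyone finding this later and wondering about the original area question: once the classified result is an 8-bit label image with one value per class, the red/green/background area fractions can be read straight from its histogram. A minimal sketch (the path is a placeholder, and the class order background = 0, red = 1, green = 2 is an assumption that depends on how the classes were defined during training):

```java
import ij.IJ;
import ij.ImagePlus;

public class ClassAreas {
    public static void main(String[] args) {
        // Open a classified image produced by TWS ("Create result" / applyClassifier) -- placeholder path.
        ImagePlus classified = IJ.openImage("/path/to/classified.tif");

        // For an 8-bit label image, the histogram directly counts pixels per class index.
        int[] hist = classified.getProcessor().getHistogram();
        long total = (long) classified.getWidth() * classified.getHeight();

        // Label values are assumed to follow the class order used during training.
        String[] labels = { "background", "red", "green" };
        for (int i = 0; i < labels.length; i++) {
            double fraction = 100.0 * hist[i] / total;
            System.out.printf("%s: %d pixels (%.2f%%)%n", labels[i], hist[i], fraction);
        }
    }
}
```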