Trainable Weka Segmentation: morphology from probability map

weka
morpholibj
segmentation

#1

Dear @iarganda

I have trained a classifier on my training images and experimented with different features. I have then used the probability map from the classifier as input to the MorphoLibJ Morphological Segmentation plugin to get morphometry data (using the object, rather than border, input option).

The goal is, given an aerial image of many leaves occluding one another, to count the number and area of each leaf in the image. I get very good edge-identification results if I use features like Laplacian, Sobel, or variance. However, the problem I am seeing is that training the Trainable Weka Segmentation on the raw RGB image with one or more of these features produces very good classification results, but the downstream morphometry results are not so good and involve too many parameter choices. This is independent of whether I use probability maps or classification results as input to MorphoLibJ.
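(For reference, the relationship between the two kinds of TWS output mentioned above: the classification result is essentially the per-pixel argmax over the per-class probability maps. A minimal plain-Java sketch, with a hypothetical array layout of one probability array per class:)

```java
// Sketch: derive a hard classification from per-class probability maps.
// Hypothetical layout: probs[c][i] = probability of class c at pixel i.
public class ArgmaxClassifier {
    public static int[] classify(float[][] probs) {
        final int numPixels = probs[0].length;
        final int[] labels = new int[numPixels];
        for (int i = 0; i < numPixels; i++) {
            int best = 0;
            for (int c = 1; c < probs.length; c++)
                if (probs[c][i] > probs[best][i])
                    best = c; // class with the highest probability wins
            labels[i] = best;
        }
        return labels;
    }
}
```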

As mentioned above, features like Laplacian or Sobel, or a simple convolution, on the raw RGB input result in very good edge detection. I tried to preprocess the image with a filter (a Laplacian, say) and use the resulting image as input to the Trainable Weka Segmentation, but as anticipated the preprocessing with filters loses the green color information that is important for the pixel-level classifier. I have also used FeatureJ to check which features result in good edge information on the raw data and then used just one of those features in the Weka segmentation, but the results are not that great.

Do you have a recommendation on how I can address this problem?

thank you in advance for your help

best

Peyman


#2

Hello @peyman!

Can you please provide an example so I can better understand the type of image you pass to the Morphological Segmentation plugin and the type of output you consider a bad result?


#3

Hi @iarganda

Sure… here is one

original image

Laplacian (sigma=1) that gives good edge detection

learned probability map after applying Trainable Weka Segmentation with Laplacian and Sobel features

Just to be clear, I am doing the following:

1- reading raw RGB images (first image above) into the Trainable Weka Segmentation (TWS)
2- labeling the images with classes
3- selecting the Laplacian feature with sigma_min=1, sigma_max=4
4- training a classifier (an MLP)
5- getting the probability map after training
6- inputting the probability map to MorphoLibJ

I have also tried:

1- generating a Laplacian filter of the RGB image to pick up the edges (second image above)
2- converting the result into a binary mask
3- running classic watershed with the image from the step above as the mask

It still produces way too many segmentations.
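(Step 2 of the alternative above, turning the filtered image into a binary mask, is just a threshold; a minimal plain-Java sketch, where the threshold value is an arbitrary placeholder rather than anything tuned for these images:)

```java
// Sketch: threshold a grayscale edge image into a binary mask.
// Pixels at or above the threshold become 255 (edge), the rest 0.
public class BinaryMask {
    public static int[] threshold(int[] pixels, int thresholdValue) {
        final int[] mask = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++)
            mask[i] = pixels[i] >= thresholdValue ? 255 : 0;
        return mask;
    }
}
```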

What I am trying to do is see how I can guide the classifier to preserve both color and edge information as features. As you can clearly see from the second image above, a Laplacian filter nicely picks up the edges, but the probability map of the classifier appears to be losing this information even though I am using Sobel/Laplacian features.

best


#4

I see what you mean now, thanks.

If you want TWS to work as a border detector, you have to trace the borders of your objects. In your case, I would create 3 classes: leaf borders, box borders, and anything else. Have a look at how I did it:

Then I would take the probabilities of the leaf borders, convert them to 8 bit (to speed up the morphological processing) and load them into the Morphological Segmentation plugin:
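(The 8-bit conversion mentioned above is just a rescaling of the [0, 1] probabilities to [0, 255]; a minimal plain-Java sketch of that step, independent of the ImageJ converter that would normally do it:)

```java
// Sketch: convert a [0, 1] float probability map to 8-bit values,
// mapping 0.0 -> 0 and 1.0 -> 255.
public class ProbTo8Bit {
    public static int[] convert(float[] probs) {
        final int[] bytes = new int[probs.length];
        for (int i = 0; i < probs.length; i++)
            bytes[i] = Math.round(probs[i] * 255f);
        return bytes;
    }
}
```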


#5

hi @iarganda

thank you for the response. I have tried that strategy, with the difference that I only had 2 classes (‘leaf’ and ‘other’), and borders were members of the ‘other’ class. The results were not that good. May I ask:

1- which features did you use for the above and what sigmas?

2- as you saw, the Laplacian/Sobel filters do a good job of finding the borders. So rather than going through the expensive edge-labeling exercise, I wrote a utility to post-process a probability map given the binary mask of the edges detected by a (Gaussian blur + Sobel) filter from the original image. If the pixel value in the binary mask at (xi, yi) == 0 then we keep the probability-map value, else we set it to zero. Does this make sense to you?

3- do you know if there is an implementation of a Conditional Random Field for postprocessing images in ImageJ?

Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.: Semantic image segmentation with deep convolutional nets and fully connected CRFs. In: International Conference on Learning Representations (ICLR) (2015)

Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.H.: Conditional random fields as recurrent neural networks. In: IEEE International Conference on Computer Vision (ICCV) (2015)

and

Parameter Learning and Convergent Inference for Dense Random Fields
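(The post-processing rule described in question 2 above can be sketched in plain Java; the class and method names are hypothetical, and a real implementation would operate on ImageJ pixel arrays rather than bare floats:)

```java
// Sketch of the rule from question 2: keep the probability value
// where the edge mask is 0, set it to zero everywhere else.
public class EdgeMaskedProbMap {
    public static float[] apply(float[] probMap, int[] edgeMask) {
        final float[] out = new float[probMap.length];
        for (int i = 0; i < probMap.length; i++)
            out[i] = (edgeMask[i] == 0) ? probMap[i] : 0f;
        return out;
    }
}
```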

thank you ignacio


#6

Here are the settings I used:

How does the result look? You might also want to apply Canny and see if the borders are better.

Not that I know of.


#7

@iarganda

How does the result look?

it looks reasonable, but not as good as the segmentation image you shared. May I ask one more question? Did you use the freehand tool to trace the edges? I haven't used it because the script recorder does not record freehand trace coordinates. In general, is there an interaction between how the traces are recorded (which selection tool is used) and the resulting classification?

You might also want to apply Canny and see if the borders are better.

will do.

thank you Ignacio


#8

Yes, I used the freehand tool. What you can do to get the recording is to use the “segmented line”, whose points are properly recorded :wink:

In general, every pixel that you trace will be used for training, so the result will not depend much on the selection tool you use but on how well you trace representative samples.


#9

@iarganda

Yes, I used the freehand tool. What you can do to get the recording is to use the “segmented line”, whose points are properly recorded

I have been using the segmented line tool. It gave me RSI :slight_smile:

In general, every pixel that you trace will be used for training, so the result will not depend much on the selection tool you use but on how well you trace representative samples.

understood. and, once more, thank you very much for all your assistance


#10

Dear @iarganda

I am back :slight_smile: With a new problem! I have been using your getLabelImage() successfully with 2 classes. But what I am noticing is when I have 3 or more classes then I can only see the the


	public static ImagePlus getLabelImage()
	{
		final ImageWindow iw = WindowManager.getCurrentImage().getWindow();
		if( iw instanceof CustomWindow )
		{
			final CustomWindow win = (CustomWindow) iw;
			final WekaSegmentation wekaSegmentation = win.getWekaSegmentation();

			final int numClasses = wekaSegmentation.getNumOfClasses();
			final int width = win.getTrainingImage().getWidth();
			final int height = win.getTrainingImage().getHeight();
			final int depth = win.getTrainingImage().getNSlices();
			// choose the label image bit depth based on the number of classes
			final ImageStack labelStack;
			if( numClasses < 256)
				labelStack = ImageStack.create( width, height, depth, 8 );
			else if ( numClasses < 256 * 256 )
				labelStack = ImageStack.create( width, height, depth, 16 );
			else
				labelStack = ImageStack.create( width, height, depth, 32 );

			final ImagePlus labelImage = new ImagePlus( "Labels", labelStack );
			// draw every ROI of each class on its slice, using (class index + 1) as the label value
			for( int i=0; i<depth; i++ )
			{
				labelImage.setSlice( i+1 );
				for( int j=0; j<numClasses; j++ )
				{
					 List<Roi> rois = wekaSegmentation.getExamples( j, i+1 );
					 for( final Roi r : rois )
					 {
						 final ImageProcessor ip = labelImage.getProcessor();
						 ip.setValue( j+1 );
						 if( r.isLine() )
						 {
							 ip.setLineWidth( Math.round( r.getStrokeWidth() ) );
							 ip.draw( r );
						 }
						 else
							 ip.fill( r );
					 }
				}
			}
			labelImage.setSlice( 1 );
			labelImage.setDisplayRange( 0, numClasses );
			return labelImage;
		}
		return null;
	}

#11

Hello @peyman,

It seems your question got cut in the middle. Can you please rephrase it? I made a test with 3 classes on my machine and everything seems to work as expected…


#12

Hi @iarganda

Apologies! I found the solution to the problem and thought I had deleted the message, but it seems I had not. Please ignore.

thank you

Peyman