Weka 3D - How is the depth of an ROI drawn in 2D defined?

weka
roi

#1

I was trying to use Trainable Weka Segmentation 3D and was wondering how a 3D ROI is constructed (and handed over as a trace to Weka) from an ROI drawn with ImageJ’s standard ROI tools. Adding a new class example only takes the active slice, but what about the adjacent slices that contain the structure I want to capture?


#2

Hello @drchrisch and welcome to the ImageJ forum!

The ROIs you select in the Trainable Weka Segmentation 3D plugin are only 2D, which means they include only the voxels contained in the ROI on the currently visible slice.


#3

Well, what then is the meaning of ‘3D’?
How can I make clear that I want to, e.g., define a spherical substructure that extends through only a few slices, and distinguish it from a tubular structure seen along its long axis?


#4

For that you will have to go slice by slice, defining your ROI as consecutive 2D ROIs. There are no 3D ROIs in ImageJ, so the only option is to use the 2D planes.
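
If you script the plugin, this slice-by-slice labeling corresponds to calling `addExample(classIndex, roi, sliceNumber)` on a `trainableSegmentation.WekaSegmentation` object once per slice. A minimal Jython sketch (the boolean “3D” constructor is my assumption from newer TWS versions; ROI coordinates and slice numbers are placeholders):

```python
# Jython (Fiji script editor). Sketch only: labels the same spherical
# structure on a few consecutive slices via one 2D ROI per slice.
from ij import IJ
from ij.gui import OvalRoi
from trainableSegmentation import WekaSegmentation

image = IJ.getImage()                    # the open 3D stack
segmentator = WekaSegmentation(True)     # True = 3D features (assumed constructor)
segmentator.setTrainingImage(image)

# addExample(classIndex, roi, sliceNumber): one 2D ROI per slice
for n in range(10, 14):                  # placeholder slice range covering the structure
    segmentator.addExample(0, OvalRoi(50, 50, 20, 20), n)
```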


#5

You do not need to trace/label the entire structure that you want to have classified. To train a classifier for Trainable Weka Segmentation, it is sufficient to give just a “few” representative example pixels. You can use the 3D stack to determine on your own whether the structure is spherical or tubular, then draw a (2D) ROI somewhere inside it and add it to the respective class. When you train the classifier, the plugin will calculate the 3D image features per voxel and learn from your traced labels how to classify all the other voxels in the image.

It’s a common misconception that you’d need a “full” ground truth for training the classifier. Just start with a few sparse labels, and refine them after an initial training wherever you see the prediction is still wrong.
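
If you prefer scripting, the sparse-label workflow looks roughly like this (a sketch built from the documented TWS scripting calls; class indices, ROI coordinates, and slice numbers are placeholders):

```python
# Jython sketch of the sparse-labeling loop: add a few example ROIs per
# class, train, inspect the prediction, then refine where it is wrong.
from ij import IJ
from ij.gui import Roi
from trainableSegmentation import WekaSegmentation

image = IJ.getImage()
segmentator = WekaSegmentation(image)                # 2D-feature variant; see the 3D constructor above

segmentator.addExample(0, Roi(30, 40, 15, 15), 12)   # class 0: a few pixels inside the sphere
segmentator.addExample(1, Roi(80, 20, 10, 40), 12)   # class 1: a few pixels on the tube

if segmentator.trainClassifier():                    # returns False if training fails
    result = segmentator.applyClassifier(image)      # classify every voxel of the stack
    result.show()                                    # inspect; add labels where the prediction is wrong
```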

Does that make sense?


#6

Thanks @imagejan, and yes, that makes sense (and now the term ‘Weka 3D’ does too).
When calculating voxel-based image features, I could imagine that the sigma values should be kept within a certain range. Is there a general suggestion for a typical range of values, and should one take into consideration the true size of the objects to be classified?


#7

That’s correct. The sigma values are, in general, the radii of the 3D filters used.

You can think of sigma as the radius of the “field of view” of the features you are using. If you need a voxel to be classified using information contained N voxels away, then you should make sure a sigma of N is used.
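
In scripts, you can tie that range to your expected object size before training. A sketch, assuming the `setMinimumSigma`/`setMaximumSigma` setters from the TWS scripting examples (verify the method names against your installed version):

```python
# Jython sketch: tie the feature sigmas to the expected object size.
from ij import IJ
from trainableSegmentation import WekaSegmentation

image = IJ.getImage()
segmentator = WekaSegmentation(image)

# If the structures of interest are roughly 16 voxels across (an assumed
# value), a maximum sigma near that radius lets a voxel "see" far enough.
segmentator.setMinimumSigma(1.0)
segmentator.setMaximumSigma(16.0)
```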