Sensitivity of Weka Segmentation Scripts / Input images

weka
segmentation

#1

Hi all,

this Trainable Weka Segmentation scripting is quite a nice tool and I’m really happy with how fast one can produce quite reasonable segmentations. However, users sometimes get the error ‘Could not apply Classifier’ and some strange errors (like an OutOfBoundException) in the log. I’m also experiencing that Weka scripts sometimes happen to fail and sometimes work.

Recently, other topics with similar errors appeared here and here.

So my first question to you experienced folks out there: am I correct that the source of these errors is usually the input images used for training/testing? Are there any cross-influences with filters that are only compatible with certain image formats?

So far, I have experienced a dependency on image size (pixels × pixels; maybe a memory issue), image depth (8-bit … 32-bit), binary/RGB, and image format (e.g. png vs. tif, but I’m not sure about that). However, I’m really just reading symptoms. Maybe some of you know more about the limitations? For me it’s quite a pity and cumbersome to find out what works and what doesn’t, because I have different batches of images (with different formats and settings, for example), but basically with similar image content/structures. I also couldn’t find a good strategy to convert ‘error’ images into ‘working’ images.

Maybe this would also be a good opportunity to collect all the experiences with the script in this new topic - together with you guys! Then it could go on the wiki later on.

Cheers,
Chris


#2

Hello Chris :slight_smile:

In general there is a limit to how much you can expect a plugin to cope with. That said, I did not know image format mattered, unless it involves lossy compression like JPEG… or compression at all.

The obvious solution is to keep these factors in mind when acquiring images: use the same settings, bit depth, image size etc. But of course it is not always that easy. I ran into this bit-depth sensitivity myself when a colleague wanted me to run his images, which were sometimes 8-bit, sometimes 16-bit…

But there is a way around this issue: you can rescale the image to a different bit depth. Divide the whole image by its highest pixel value, change the image type, and multiply up to the new range, i.e. 65535 for 16-bit, with ‘Scale when converting’ disabled.
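For anyone who wants to do this outside the ImageJ GUI, here is a minimal NumPy sketch of that recipe (the function name is illustrative, not part of ImageJ or Weka; it uses min-max rescaling, which reduces to a plain divide-and-multiply when the image minimum is 0):

```python
import numpy as np

def rescale_to_16bit(img):
    """Rescale any numeric image onto the full 16-bit range [0, 65535].

    Mirrors the manual recipe: normalize by the extreme pixel values,
    then multiply up to the target range before converting the type.
    """
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: avoid division by zero
        return np.zeros(img.shape, dtype=np.uint16)
    scaled = (img - lo) / (hi - lo) * 65535.0
    return scaled.astype(np.uint16)

# An 8-bit image spanning its full range maps onto the full 16-bit range:
img8 = np.array([[0, 128, 255]], dtype=np.uint8)
out = rescale_to_16bit(img8)  # maps 0 -> 0, 128 -> 32896, 255 -> 65535
```

Doing the arithmetic in float and only converting to `uint16` at the end avoids the clipping you would get from an in-place integer multiply.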

Or, if you have multichannel images like I do, you can create composite RGBs and segment those rather than single channels. But in my case, all three channels feature fluorescent proteins that localize ubiquitously…

I am not sure what the best practice is for different sizes or zoom levels; maybe @iarganda has a clever solution.

edit, @cbe: I’m at my office now. I think this is an important question; these things are not so obvious to new users, and maybe the main issues could be covered in the troubleshooting section of the wiki, along with some code structure that can adapt to, or handle errors from, different inputs. (I can look into this myself in June if people agree; the simpler issues shouldn’t be much work to cover.)

Sverre


#3

Hello @cbe,

Apart from @Sverre’s useful recommendations, here are a few comments about using Trainable Weka Segmentation, from both the plugin and the library methods:

  • When working with large images, be aware of the fact that using many different features will significantly increase your machine’s memory consumption. Basically, every feature is a 32-bit version of the original image that needs to be stored in RAM. Therefore, try to keep the number of features under control, limited to the set of most informative features for your specific problem.

  • The exceptions you mentioned are usually related to either wrong input images or to a previous memory problem. If you train a classifier on a specific image type, it will only work on images of that type unless the range of intensity values is preserved (as pointed out by Sverre). If you trained your model using an RGB image, you cannot expect it to work on 8-bit images.

  • For a good classifier to work on new images, it is always a good practice to normalize as much as possible both the training and the test images.
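As a concrete illustration of that last point, here is one common way to normalize images before training and testing (a hedged sketch: the percentile-based approach and the 1/99 cut-offs are an illustrative choice for robustness against hot pixels, not something prescribed by the Weka documentation):

```python
import numpy as np

def normalize_percentile(img, low=1.0, high=99.0):
    """Clip to the [low, high] intensity percentiles and rescale to [0, 1].

    Applying the same normalization to both training and test images
    puts them on a common intensity scale regardless of bit depth.
    """
    lo, hi = np.percentile(img, [low, high])
    if hi == lo:  # flat image: nothing to rescale
        return np.zeros_like(img, dtype=np.float32)
    out = (img.astype(np.float32) - lo) / (hi - lo)
    return np.clip(out, 0.0, 1.0)
```

The key is to run the identical function over every image, so an 8-bit and a 16-bit acquisition of the same structures end up in the same value range before the classifier sees them.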

You can find a few more recommendations in the supplementary material (section 3.5) of our 2017 Bioinformatics paper.

What is different between the “error” and the “working” images? Are they the same type and size?


#4

Hi again and thanks for your useful comments, @iarganda and @Sverre :slight_smile:

I was able to run the script successfully on the same day and could track the errors back to a bad mix of different image depths and RGB/grayscale for training/testing. Converting some of the images fixed it for me. Both .tif and .png, and even a mix of both, worked fine, so all good here.

I will keep the memory issue related to the number and selection of features in mind!

Cheers,
Chris