Inconsistent voxel size handling in Trainable Weka feature generation

The interpretation of the sigma parameter appears to be inconsistent between different 3D features. In particular, the ImageJ filter features cannot take unequal x/y/z scales into account. Am I missing something? Is there a good workaround?

I have only looked carefully at the Mean and Hessian features, but I assume that the other features using ImageJ filters behave like Mean, while the features from ImageScience/FeatureJ behave like Hessian. Note that I am working strictly in 3D.

My observations were:

  1. For Mean, sigma is interpreted as a number of voxels. When using the Mean 3D filter directly, separate x/y/z scales can be specified, but this is not possible within the Trainable Weka GUI, so there is no proper handling of non-isotropic image stacks.
  2. For the Hessian features, sigma is interpreted as a real distance, and the voxel size specified in the image properties is taken into account. This allows for isotropic features to be calculated (to the extent possible) for a non-isotropic stack. This is good. (Although it doesn’t seem to be the case using these features directly via FeatureJ; instead the sigma parameter is interpreted in voxels, and there is no scope to specify separate values for x/y/z.)

As a result, when correct voxel dimensions are specified in the image properties, it may be impossible to use both the ImageJ filter and ImageScience features. For example, I have voxels of approximately 100x100x200 nm. If Sigma = 4, Mean is useful but the Hessians are all zero. If Sigma = 400, the Hessians are useful but the system crashes trying to calculate Mean, which would be useless anyway. I can obviously work around this by not specifying the voxel size in the image properties and just working in terms of voxels. But I am still stuck with the fact that Mean etc. cannot take unequal x/y/z scales into account. The only workaround I can think of is to calculate features outside the GUI, along the lines of the sketch below.
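For the record, that last workaround might look something like the following script. This is only a sketch using ImageJ’s Filters3D class (which I believe is what the Mean 3D command uses under the hood); the file path and radius are placeholders:

```java
import ij.IJ;
import ij.ImagePlus;
import ij.ImageStack;
import ij.measure.Calibration;
import ij.plugin.Filters3D;

public class AnisotropicMean3D {
	public static void main(String[] args) {
		ImagePlus imp = IJ.openImage("/path/to/stack.tif"); // placeholder path
		Calibration cal = imp.getCalibration();

		// Radius along x, in voxels; rescale y and z so the kernel is
		// (approximately) isotropic in real units.
		float rx = 4f;
		float ry = rx * (float) (cal.pixelWidth / cal.pixelHeight);
		float rz = rx * (float) (cal.pixelWidth / cal.pixelDepth);

		ImageStack mean = Filters3D.filter(imp.getStack(), Filters3D.MEAN, rx, ry, rz);
		new ImagePlus("Mean_aniso", mean).show();
	}
}
```

For my 100x100x200 nm voxels this gives radii of 4, 4 and 2 voxels.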



Hello @jlefevre and welcome to the ImageJ forum!

First of all, sorry for the huge delay in answering you. This post somehow escaped my radar.

Thanks for pointing out these inconsistencies in the 3D features of the Trainable Weka Segmentation plugin. This plugin and library are under continuous development, so it is thanks to users like you that they can be continuously improved.

Now, what do you think would be simpler for the user? To include 3 different sigmas for x/y/z or to keep the single sigma but adjust its value depending on the image properties as we are doing with the ImageScience features? In my opinion, the second option is simpler to implement and more transparent to the casual user, but a comment should be added in the documentation so everybody is aware of that behavior when using anisotropic data.

Looking forward to hearing your comments,


Thanks @iarganda. I’m so sorry I didn’t reply to this earlier; I went on leave and the notification email somehow never made it to me.

I agree that your second suggestion is best, so that the ImageJ filters work in the same way the ImageScience features do now. A note to clarify this in the interface and/or documentation would be useful, since the difference in setup when calculating these features directly could be confusing.

With this system, I can’t think of any reason why anyone would want different x/y/z sigmas or to work in terms of voxels, although my experience is limited, so I suppose it is possible.

Sorry again for the horribly late reply, and thank you for your work on this lovely system.


Dear @jlefevre,

I believe I have implemented a reasonable and transparent-to-the-user solution. I have adapted the code so that all the 3D sigmas adjust their shape based on the input image calibration. This includes the ImageScience features as well. In this solution, the sigmas entered by the user are in voxel units. Therefore, if you use a voxel size of (let’s say) 1.0 x 1.0 x 3.0 microns, a sigma of 4 would mean a sigma of 4 x 4 x 4/3.
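In code, the adjustment is roughly the following (a simplified sketch of the rule, not the actual plugin code):

```java
import ij.measure.Calibration;

public class SigmaAdjust {
	// The user-entered sigma is in voxel units along x; y and z are
	// rescaled so that the kernel is isotropic in real units.
	static double[] adjustedSigmas(double sigma, Calibration cal) {
		return new double[] {
			sigma,
			sigma * cal.pixelWidth / cal.pixelHeight,
			sigma * cal.pixelWidth / cal.pixelDepth
		};
	}
	// e.g. a voxel size of 1.0 x 1.0 x 3.0 and sigma 4 gives {4, 4, 4/3}.
}
```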

If you agree with this solution, I will push my changes to the master branch of the plugin code and make a new release.



If I understand correctly, with this patch the ImageJ filters will work exactly the same way as the ImageScience features already do in Trainable Weka.
This seems perfect - thank you!


That’s the idea, so we can keep the sigma units as they are in TWS and play with the calibration of the image if we need something else. I’ll make the release then. Please let me know if you find any errors, and thanks again for reporting this bug!


I have just made a new release and updated the documentation accordingly. Please update Fiji so you get the latest version of the plugin.


Sorry Ignacio, either I am very confused or there is something wrong here. The x/y/z dimensions in the image properties appear to be interpreted in a relative way.

I mostly tested the Hessian and Mean features (one feature from each group). Setting the voxel size to 1/1/1 seems to give exactly the same results as 2/2/2. I can get the same mean using the 3D filter directly, and the same Hessians using ImageScience.computeHessianImages with voxel size 1/1/1.

I then tested unequal scales by looking at voxel sizes 1/1/2 and 2/2/4. Again, these gave the same mean and Hessian results as each other in Trainable Weka. However, they weren’t the same results I got using 1/1/1 and 2/2/2. I could reproduce Mean_1.0 using the 3D Mean filter directly with parameters 1/1/0.5 (so the relative proportions are interpreted correctly). But I haven’t been able to make sense of the Hessian results, and I can’t reproduce any of them using ImageScience.computeHessianImages.

Also, I found that while ImageScience.computeHessianImages with voxel size 1/1/1 and sigma=1 gives the same first Hessian as voxel size 2/2/2 and sigma=2, voxel size 1/1/2 and sigma=1 does not give the same first Hessian as voxel size 2/2/4 and sigma=2. So the scaling property doesn’t seem to work properly with an anisotropic stack (obviously this issue concerns the ImageScience API packaged with Trainable Weka, not the GUI).

Any ideas?


Yes, that was my idea; I’m sorry if I didn’t make myself clear. The sigmas are in voxel units but adjusted to be of the same real-unit size. Therefore a sigma of 1 will be the same for a voxel size of 1x1x1 and 2x2x2.

No, because the sigma size gets adjusted to be isotropic in real units. If the voxel size is 1x1x2 (or 2x2x4), the applied sigma will be 1x1x0.5. Does it make sense now?

Do you mean this code? Did you play with the “absolute” parameter?


Thanks Ignacio, that does clear things up. My understanding now is that for voxel size x/y/z and sigma s, the applied sigma will be (s, sx/y, sx/z). As you say, this adjusts for unequal scales, and if necessary the image properties can be manipulated to customise the sigma in three dimensions (I haven’t actually tested different x and y scales, since that is irrelevant for my data).

The aspect that is still giving me difficulty is the ImageScience features with unequal scales, specifically the Hessian eigenvalues. What I’d like to know is if I generate the Hessians in the Trainable Weka GUI, what call to the ImageScience API is necessary to get the same result? Previously I could just give it the same sigma (with absolute==true). This still works with isotropic images (accounting for the difference between pixel scale and the specified units), but I can’t reproduce the output for voxel size 1/1/2 using the API. Can you shed any light on this?

By the way, what I said before about the ImageScience API not having the expected scaling behaviour was wrong; that was my mix-up. Using the API, 1/1/2 with sigma=1 gives the same result as 2/2/4 with sigma=2. The problem is that I would expect to get the same result using the GUI with 1/1/2 and sigma=1, and I don’t.

Cheers, James



For the ImageScience features I had to use a trick and set the calibration (voxel size) of the input image to the scale factor of each dimension. This way I could get consistent behavior across all features. I hope it makes sense now!
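Schematically, the trick looks something like this (a simplified sketch, not the exact FeatureStack3D code):

```java
import ij.ImagePlus;
import ij.measure.Calibration;

public class CalibrationTrick {
	// Reuse the per-dimension scale factors as a fake calibration
	// before calling the imagescience-based helper methods.
	static ImagePlus withScaleFactors(ImagePlus original) {
		Calibration cal = original.getCalibration();
		ImagePlus copy = original.duplicate();
		Calibration fake = copy.getCalibration();
		fake.pixelWidth = 1.0;                               // factor for x
		fake.pixelHeight = cal.pixelWidth / cal.pixelHeight; // factor for y
		fake.pixelDepth = cal.pixelWidth / cal.pixelDepth;   // factor for z
		return copy;
	}
	// The copy is then passed to the helpers, e.g.
	// trainableSegmentation.ImageScience.computeHessianImages(...).
}
```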


Sorry, I guess I should have looked up the FeatureStack3D source myself and saved some back and forth. I can now reproduce the Hessians produced by the Trainable Weka 3D GUI for voxel size (1,1,2) by resetting the voxel size to (1,1,0.5) and then calling the ImageScience API, which is consistent with my understanding of the code.

Unfortunately this doesn’t seem to be quite right, unless my understanding of the ImageScience API is incorrect. I think for the ImageScience features, the pixel dimensions need to be divided by the scaleFactor instead of multiplied by it.

Say we have voxel size (x, y, z) and sigma s. We want the applied sigma in pixels to be (s, sx/y, sx/z). If I understand correctly, ImageScience converts to the units specified in the image properties by using an applied sigma of (s/x, s/y, s/z). The scale factors your code calculates are (1, x/y, x/z). If we set the pixel dimensions to the scale factors, then ImageScience applies a sigma of (s/1, s/(x/y), s/(x/z)) = (s, sy/x, sz/x). If instead we set the pixel dimensions to the reciprocals of the scale factors, that is (1, y/x, z/x), then ImageScience applies a sigma of (s/1, s/(y/x), s/(z/x)) = (s, sx/y, sx/z), which is what we want.
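To make the arithmetic concrete (appliedSigma is a hypothetical helper that just mirrors the unit conversion described above):

```java
public class SigmaCheck {
	// ImageScience divides the given sigma by the element size in each
	// dimension, so the applied per-axis sigma in voxels is:
	static double[] appliedSigma(double s, double ex, double ey, double ez) {
		return new double[] { s / ex, s / ey, s / ez };
	}

	public static void main(String[] args) {
		// Voxel size (1, 1, 2), sigma s = 2; desired applied sigma: (2, 2, 1).
		// The scale factors (1, x/y, x/z) come out as (1, 1, 0.5).
		double[] current = appliedSigma(2, 1, 1, 0.5); // element sizes = scale factors
		double[] fixed = appliedSigma(2, 1, 1, 2.0);   // element sizes = reciprocals (1, y/x, z/x)
		System.out.println(java.util.Arrays.toString(current)); // [2.0, 2.0, 4.0] -> too wide in z
		System.out.println(java.util.Arrays.toString(fixed));   // [2.0, 2.0, 1.0] -> what we want
	}
}
```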

Cheers, James


Hi Ignacio, I think I have been replying to the topic instead of to your posts. Sorry about that, I’m new to this forum.

Have you looked at my previous post? I suspect that the way the scaling factor is applied for the ImageScience features is not quite right (voxel dimensions should be reset to the reciprocal of the scaling factors instead of just the scaling factors). I put more details into that post. What do you think?

Cheers, James


Hello again, @jlefevre, and sorry for the late answer,

Please be aware that there are two ImageScience classes: a helper class created in the Trainable Weka Segmentation (TWS) library, and the class from the original imagescience library. In the helper class we call the original methods from imagescience.

To reuse those original methods, I came up with the trick of using the scale factors as calibration values. Have a look, for example, at the run method of the Laplacian class. The documentation of the scale parameter reads:

> scale: The smoothing scale at which the required image derivatives are computed. The scale is equal to the standard deviation of the Gaussian kernel used for differentiation and must be larger than 0. In order to enforce physical isotropy, for each dimension, the scale is divided by the size of the image elements (aspect ratio) in that dimension.

So, thanks to the trick, I believe we are doing what you expected, aren’t we?
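To illustrate, calling the original class directly would look roughly like this (a sketch, assuming I remember the imagescience API correctly; the aspects here play the role of the calibration values):

```java
import ij.ImagePlus;
import imagescience.feature.Laplacian;
import imagescience.image.Aspects;
import imagescience.image.FloatImage;
import imagescience.image.Image;

public class LaplacianExample {
	// The aspects (element sizes) of the wrapped image are what the
	// scale gets divided by in each dimension.
	static ImagePlus laplacianAtScale(ImagePlus imp, double scale,
			double ax, double ay, double az) {
		Image img = new FloatImage(Image.wrap(imp));
		img.aspects(new Aspects(ax, ay, az)); // e.g. the scale factors (1, 1, 0.5)
		return new Laplacian().run(img, scale).imageplus();
	}
}
```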


Hi @iarganda, yes, I’m aware of the two classes. I appreciate the convenience of the trainableSegmentation.ImageScience helper class, particularly since it handles the type conversions between ImageScience proper and ImageJ. This is the class I was referring to earlier.

I agree with your approach of rescaling the voxel dimensions in order to get the desired behaviour from the original ImageScience class. But I still think there is a flaw in the implementation, stemming from the inverse relationship between the scaling factor and voxel dimension.

Looking at your FeatureStack3D code that you linked above, I believe it works as follows: you calculate a scaling factor for the y and z dimensions of the voxel, which is the x dimension divided by the y and z dimensions respectively (the scaling factor for x is always 1). For the ImageJ filters, you then multiply the specified sigma by these scaling factors to get the sigma in pixels for each dimension, which I agree with. But this is not possible for the ImageScience features, so instead you reset the voxel size to the scaling factors. Because of the inverse relationship between the scaling factor and the voxel dimension, the voxel size should instead be reset to the reciprocals of the scaling factors.

I’ll go through an example in detail, so that if I am confused you can hopefully point to the exact issue. Suppose that the voxel size is 1x1x2 and the specified sigma is 2. The sigma that we want to apply is 2x2x1 voxels, since 1 voxel in the z direction is equivalent in real distance to 2 voxels in the x or y directions. Now, the scaling factors calculated in the FeatureStack3D code will be 1, 1, 0.5. The voxel size is reset accordingly and then the appropriate ImageScience method is called. ImageScience converts the sigma provided (which is 2) into real units according to these reset image properties. It calculates that the sigma in the x and y directions should be 2 voxels, but that the sigma in the z direction should be 4 voxels (since it believes the voxel is 0.5 units deep, it needs 4 voxels to equal 2 real units). So we end up with a 2x2x4 sigma instead of the 2x2x1 sigma we wanted. Does that make sense?