Convert micro-CT to Cartesian from cylindrical coordinates




I have about 1,800 images that were taken with the sample on a turntable using micro-CT. As a result, they are rendered in cylindrical coordinates. I would like to reconstruct a 3D volume, but I believe I have to convert them to Cartesian coordinates before reconstructing so as to remove the spiral in the Z axis. I have attempted to use the Fiji plugins TomoJ and Beat Munch's Transform 2D 3D (which supports a cylinder-to-Cartesian transform), but without success; the images are still cylindrically distorted.

Any guidance as to how to move forward would be very much appreciated.


Good day Andrew,

Please post a typical raw image of a single layer in the original TIF or PNG format.
(No JPG, because JPG compression introduces artifacts!)
You may also post images as Zip archives.





Thank you for responding. Sadly, I do not have the authority to publish any of these images to a public forum. I could, though, send an example image privately to anyone who has experience with this type of problem. If you do, I can be contacted at n00090007 at




Did you have a look at the ImageJ-plugin “Polar_Transformer.class” available from here:




Does the micro-CT device you are using not provide reconstruction software? Since conversion to Cartesian is a key step in CT, I would have thought the manufacturer (or service provider) would have also included a reconstruction program.


The images were taken (a while ago) using a borrowed Nikon XT H 225 device and Perkin Elmer 0820 panel. I have received the images for research purposes from the image owner who stated, “images could be reconstructed with VGStudioMAX’s CT Reconstruction Add-on Module.” (I don’t own VGStudio) Also, the instructions stated “it is important to choose the correct Centre of Rotation (CoR), so that an accurate alignment of the axis of rotation of the turntable with the centre of the flat panel detector is achieved.” So I am looking for an ImageJ plugin that accepts axis of rotation data as part of the input.

I have tried polar-transformer (as Herbie suggested), but it does not allow the CoR to be specified, at least not as offsets. I have the following reconstruction info:

— start of info —
At z=0, CoR offset = -14
At z=+30, CoR=-12.75
Use these as registered offsets.

Top offset = -10.76
Bottom offset = -17.24

900x900x375 @ 0.175mm
Z-range of reconstruction from -5.81 to +59.81mm rel. to images.
— end of info —

Any ideas or direction would be most helpful. Obviously, I am attempting something I have little experience with, but I’m very willing to experiment.



AFAIK “Polar_Transformer” allows one to define a center. Please study the docs and experiment a bit…





Thanks, really, for the encouragement. I’ve looked over the docs and experimented with the (few) options. I also tried the non-linear polar transformer; no joy there either. One interesting thing: the sample images converted with the polar plugin are clearly distorted visually, while my starting images, which are supposed to be in cylindrical coordinates, look normal. The object was imaged from the side using a turntable and, I think, the point of calling the coordinates cylindrical is that they describe how all the images revolve around an axis (Z) as the slices are captured.

So, I may be trying to convert an image from polar to Cartesian when, in fact, I need to convert the stack to a Cartesian volume. Does that make sense?

I’m sure it would help me, and I’d be happy to email you a sample image. I can be reached at n00090007 @




If I understand you correctly, and now that you mention it, I think you might be right in concluding that this is not even the correct transformation. From the examples on the plugin page, it looks like it is meant to transform 2D images for which one of the axes is actually the rotation angle. In other words, it goes back and forth between two 2D representations: one in (r, theta) and the other in (x, y). However, if your data set is similar to what I deal with, you are looking at a stack of images in (r, y, theta) and want to convert them to (x, y, z). Is that correct?
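If that is the layout, the regrouping itself is just an axis permutation. A hypothetical numpy sketch (the array names and toy sizes are my own, not from any particular plugin):

```python
import numpy as np

# Toy stand-in for the acquired stack: (theta, y, r) =
# (projection angle, detector row, detector column).
# The real data would be (1800, height, width).
n_theta, n_y, n_r = 180, 64, 80
projections = np.arange(n_theta * n_y * n_r, dtype=float).reshape(n_theta, n_y, n_r)

# Regroup so sinograms[y] is the (theta, r) sinogram for detector row y --
# the 2D input that a slice-by-slice reconstruction expects.
sinograms = np.transpose(projections, (1, 0, 2))  # -> (y, theta, r)

print(sinograms.shape)  # (64, 180, 80)
```

Each `sinograms[y]` then reconstructs one 2D slice at that detector row, and stacking the slices over y gives the Cartesian volume.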

Also, do you understand all the information in the input file you posted? I think I have figured out most of it, except for Top offset, Bottom offset, and Cut-off.



I believe that is correct (r, y, theta to x, y, z). When I look at the images, they are not distorted and look like the object from a side view. Each image is taken from a slightly different angle as the object spun on the turntable. My (limited) understanding of the info in the input file is this:

It appears that Z runs diagonally through the stack and that the top and bottom offsets indicate the “angle” of the axis. I find it interesting, and probably a coincidence, that adding the top and bottom offsets gives −28 mm while the Z-offset is +27 mm. Again, probably just a coincidence.

I am at a loss, though, to explain the “Z-range of reconstruction of -5.81 to +59.81mm”

Your r, y, theta to x, y, z makes sense to me. How do you reconstruct your images? I’d very much like to give that a shot and experiment with the input data I have been provided.




Well, we don’t have any lab-scale CT equipment. The data I work with is collected using synchrotron x-ray CT at facilities such as national labs. The Advanced Photon Source at Argonne National Laboratory, in collaboration with others, has created a Python package called TomoPy, which we use for our reconstructions. However, it is a bit hard to write your own script if you aren’t very familiar with the process. Furthermore, I don’t know whether their package supports your data file layout. If it doesn’t, you could look for a package that can read your layout, but you would then need to make sure the data structure you pass to TomoPy matches what it expects.

However, I think you might have been on the right track with Beat Munch’s Xlib. You said you used Transform 2D 3D, but what you actually want is one of the reconstruction plugins; in particular, I think you want Filtered Backprojection (though I am not 100% certain of that). Unfortunately, I am unfamiliar with these plugins, as well as with TomoJ, so I can’t really provide much help in using them or in figuring out why TomoJ (for anyone searching for it: download page; SourceForge page) didn’t work.
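In case it helps to see what filtered back-projection actually does, here is a minimal, numpy-only sketch of the parallel-beam version. It is an illustration only, not how Xlib or TomoJ implement it; it uses crude nearest-neighbor interpolation and assumes the rotation axis is already centered:

```python
import numpy as np

def fbp_slice(sinogram, angles_deg):
    """Reconstruct one 2D slice from a parallel-beam sinogram.

    sinogram : (n_angles, n_det) array, one projection per acquisition angle.
    """
    n_angles, n_det = sinogram.shape

    # 1. Ramp filter in the Fourier domain (the "filtered" part of FBP).
    ramp = np.abs(np.fft.rfftfreq(n_det))
    filtered = np.fft.irfft(np.fft.rfft(sinogram, axis=1) * ramp, n=n_det, axis=1)

    # 2. Back-project: for every pixel, look up the filtered detector value it
    #    projects onto at each angle, and accumulate.
    mid = n_det // 2
    coords = np.arange(n_det) - mid
    X, Y = np.meshgrid(coords, coords)
    recon = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang)            # detector coordinate
        idx = np.clip(np.rint(t).astype(int) + mid, 0, n_det - 1)
        recon += proj[idx]                               # nearest-neighbor lookup

    # Normalization pi / N for angles sampled over [0, 180).
    return recon * np.pi / n_angles
```

Feeding it a centered disk phantom, whose projection at every angle is the chord length 2·sqrt(R² − t²), gives back a bright disk; real data would need the CoR shift applied first, and a production code would use linear interpolation rather than rounding.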

Here is what I think your input file is saying. As you travel along the z-axis, your CoR changes. That is typically due to the rotation stage not being perfectly level. You will therefore either need a program that can work with a varying CoR, or you will have to pick a single value and live with some artifacts in the resulting image stack. The offset values themselves tell you where the CoR sits relative to the center of the acquired images; in your case, the CoR moves 1.25 pixels to the right (along the r-axis) for every 30 pixels moved up (along the y-axis). As I said, I am not sure about the next section.

For the reconstruction section: it is telling you that the resulting stack should be 375 slices, each 900x900 pixels, with a voxel size (the physical length associated with each pixel) of 0.175 mm. The x, y, and z offsets are, I believe, relative to some reference. Knowing that each voxel is 0.175 mm and that the center of the volume is taken as the origin, the z-axis runs from −32.8125 mm to +32.8125 mm; applying the +27.0 mm offset gives your z-range of −5.81 mm to +59.81 mm (after rounding).
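A quick back-of-the-envelope check of that arithmetic, using only the figures quoted in the thread:

```python
n_slices = 375          # reconstruction is 900 x 900 x 375
voxel_mm = 0.175        # voxel size from the info block

half_extent = n_slices * voxel_mm / 2      # 32.8125 mm to each side of center
z_offset_mm = 27.0                         # the +27 mm z-offset quoted above
z_min = -half_extent + z_offset_mm         # -5.8125 mm, reported as -5.81
z_max = half_extent + z_offset_mm          # 59.8125 mm, reported as +59.81

# CoR drift: offset -14 at z=0 and -12.75 at z=+30
cor_drift_per_row = (-12.75 - (-14.0)) / 30   # 1.25 px over 30 rows
```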



This is very helpful and I really appreciate your time. The summary you provided of the CoR fits exactly the info I have and I will attempt the transformation using Beat Munch’s Xlib. I will post back if this is successful. A colleague, who received the same data a few years ago, was able to reconstruct using VG Studio Max with these add-ins:

  1. CT Reconstruction Basic
  2. CT Reconstruction (Cone Beam, Fan Beam, Parallel Beam)
  3. Coordinate Measurement

Sadly, the price of VG Studio is out of range of our budget, hence searching for an ImageJ alternative.




If your colleague received the same data, could they not provide you with the reconstructed version?


They will, but they were looking at other (external) features, while I will be looking at internal features; I would like to a) control contrast and b) ensure measurements are as accurate as possible in the reconstructed volume.

I’ve been experimenting with Beat>Filtered Backprojection and am getting different (but not better) results. I’m betting that either I don’t know what values to use (for sure) or this reconstruction method won’t work with my data (possible).

Will keep trying but any guidance on this would be most beneficial.




Good day!

I am pretty sure now that your original request was not to the point.
As others have already noted, you seem to be looking for CT reconstruction from sinogram data!
Although ImageJ plugins exist for this purpose, I am far from sure they will work with your data.




Hi @drewt,

From the sounds of it, you are getting closer. It is hard to guess what the issue is without having sample data, but I understand it is not possible for you to share it. If the reconstructed volumes look odd, it may be artifacts that call for preprocessing the input data or pinning down the reconstruction parameters a bit better. In my experience, generating a “nice” tomogram requires a few things.

  1. Ensure the original image series is flat-field corrected (so there is no intensity fall-off as you move away from the middle).
  2. Remove dead/bright pixels and reduce noise (there are lots of filters for this).
  3. Very accurately determine the rotation axis (the TomoJ paper describes two ways to do this that you may want to try first).
  4. Perform the reconstruction with filtered back projection or an algebraic method.
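The first two steps can be sketched in a few lines; this is only an illustration of the idea (the function name and the 3x3 median are my own choices, not any plugin's defaults):

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(raw, flat, dark):
    """Flat/dark-field correct one projection and suppress dead/hot pixels.

    raw  : the acquired projection image (2D array)
    flat : open-beam image (no sample in the beam)
    dark : beam-off image (detector offset)
    """
    # Flat-field correction removes detector-sensitivity and beam-intensity
    # variation; the clip avoids division blow-ups at dead flat-field pixels.
    corrected = (raw - dark) / np.clip(flat - dark, 1e-6, None)
    # A small median filter knocks out isolated dead/bright pixels.
    return median_filter(corrected, size=3)
```

The result is a transmission image; one would then take its negative log (Beer-Lambert) before feeding it to the reconstruction.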

Have you tried determining the center of rotation on your own and seeing how it compares with the metadata in the file you have, or reconstructing with this estimated rotation axis instead? If I recall correctly, TomoJ draws the axis of rotation onto the image for you. If it does, when you scroll through the acquisition angles, does the axis drawn from the values you entered look like the actual rotation axis?
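One common way to estimate the CoR yourself (assuming your scan gives you projection pairs 180° apart) is to mirror the 180° projection and find the shift that best matches the 0° one; half that shift is the axis offset from the image center. A rough, integer-pixel sketch of the idea, not TomoJ's actual method:

```python
import numpy as np

def estimate_cor_offset(proj_0, proj_180, max_shift=50):
    """Estimate the rotation-axis offset, in pixels relative to the image
    center, from two projections acquired 180 degrees apart.

    For parallel-beam geometry, the 180-degree projection mirrored
    left-right should match the 0-degree one once both are centered
    on the true rotation axis.
    """
    mirrored = proj_180[:, ::-1]
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        # np.roll wraps around, which is unphysical but harmless
        # for a rough estimate.
        err = np.mean((proj_0 - np.roll(mirrored, s, axis=1)) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    # The axis sits halfway between the two aligned copies.
    return best_shift / 2
```

A refinement would repeat this at sub-pixel shifts, or reconstruct a test slice over a range of candidate centers and pick the sharpest result.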

If you haven’t already, you may want to check out some reviews discussing typical reconstruction artifacts, so you can identify the source of any further reconstruction issues you run into. There are quite a few reviews out there, but here is one to get started that covers the typical problems.

I hope this helps a bit!




The script I use for mine also applies dark-field corrections and additional pre- and post-processing to remove things like ring artifacts.