Great Overlaid Rigid Rod Segmentation Challenge


Hello dlegland,

Orientation filtering isn’t easy and, as you write, generally requires post-processing. In fact, I considered orientation filtering first but found my slightly related approach more robust. (As far as I understand Tony Gozdz, length analysis of curved rods is not required.)

BTW, the chord transformation dates back to at least 1972 (Moore 1972). However, Sandau & Ohser (2007) don’t refer to the work done in the 1970s and 1980s.




Hello George,

well, I haven’t said this was the best approach, just a possibility to try!

Anyway, thanks for the reference; I did not know about it.


This is also a work-in-progress in CellProfiler; you can see the PR for the IdentifyLinearObjects module here. I quickly put together a segmentation for your rods; with >5 minutes of tweaking you could probably improve it even more. The second image has measured lengths and widths.


Just wanted to thank everybody participating in this thread for your interest, time and valuable suggestions. For the moment, I’ll stay on the sidelines as I can’t really contribute any technical insights, just judge the practicality and accuracy of the proposed methods from the point of view of a non-expert user. I’d like to add that I recently tried CellProfiler as well, and even contacted one of the co-authors, but I was informed that the program has not been designed and fine-tuned to resolve and measure such linear, low-L/d objects. So, I’m impressed with @bethac07’s example! However, can I realistically expect the program to deal efficiently with 500–2000 objects, as in the example mentioned below?

If I may make one suggestion to those posters who solved the simplistic seven-line examples at the top of this thread: try to apply the proposed methodology to a real-life situation, perfectly reflected in the greenish-background image posted earlier; this is what I and others in the composite-materials field have to deal with.

Thanks again for your ideas–I’ll keep following your interesting discussion!


Dear Tony Gozdz,

that’s a bit unfair, because the real-world image contains joints or crossings of more than two rods; hence the test example was not representative. Furthermore, you stated earlier that the width of the rods is constant, which is not the case in the real-world image. Last but not least, the real-world image is poorly resolved spatially.

In short I’m out. No fun!



Hi @George, sorry that life is unfair! :wink: Anyway, multiple-crossing groups are rare and could probably be minimized by diluting the suspension, so that’s not a deal breaker. On your second point, the width is constant to within 0.1 µm; it’s the images that are not perfect, and in real life there are always some tolerances. Finally, there are good, so-so and bad images. Here’s a better one (are you still game?)


Dear Tony Gozdz,

the spatial resolution of the real sample image is bad compared to that of the test image.

I’ve started with a 400x400 excerpt taken from the top left.
Here are the results:

281 ROIs were considered.

Analysis within ROIs was limited to two rods.

Total ImageJ-macro processing time was about 11.5 seconds on the fastest current iMac.




@George, thanks! What do you mean by stating that “Analysis within ROIs was limited to two rods”? It’s hard to tell from the attached images, but does your procedure resolve touching and crossing objects as individual full-length fiber fragments? I can’t tell…

Re the poor resolution, this is a real-life image; the first example was a synthetic image drawn by me…


Some questions about this challenge:

  1. How many false positives can you accept?
  2. How many false negatives can you accept?
  3. How much is the reward for this jo… oops, challenge? :wink:

have a nice day


Hello @TG3 and sorry for the late answer,

My feeling is that sigma can be used to define how close two detected junctions in the same object can be before they are considered the same. I’ll try to play with your new image as soon as I have some time.



Dear Tony Gozdz,

as mentioned previously, i.e. here

If more than two rods overlap, the problem becomes more involved.

and here

[…] real-world image contains joints or crossings of more than two rods.

any analysis based on “Ultimate Points” needs more effort if more than two rods are to be analyzed, effort that I’m not willing to spend. Consequently, I’ve limited the analysis to two rods per ROI, and of course this limitation introduces a certain bias. The restriction can easily be recognized from the posted results: there are at most two images with the same ROI number. In other words, the result consists of 336 images (isolated rods of measured lengths) but only 281 ROIs, which, with much involved maths, tells us that in 55 of the 281 ROIs two rods were analyzed.
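
The counting argument can be sanity-checked with a quick sketch (Python here purely for illustration, not ImageJ code; the ROI numbers are invented): each ROI number that appears twice among the result images corresponds to a ROI in which two rods were analyzed.

```python
from collections import Counter

# Hypothetical ROI numbers attached to each result image: most ROIs yield
# one isolated rod, some yield two (same ROI number appears twice).
roi_ids = [1, 2, 2, 3, 4, 4, 5]  # 7 images from 5 ROIs

counts = Counter(roi_ids)
n_images = len(roi_ids)
n_rois = len(counts)
two_rod_rois = sum(1 for c in counts.values() if c == 2)

# As in the post: images - ROIs = ROIs that contributed two rods
assert two_rod_rois == n_images - n_rois
print(n_images, n_rois, two_rod_rois)  # -> 7 5 2
```

With the posted numbers, 336 images minus 281 ROIs gives the 55 two-rod ROIs the same way.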

The spatial resolution of the rods is poor and perhaps the absolute minimum possible. The resolution of the test image is at least 5 times higher. And yes, this makes a big difference for an approach that was conceptualized with respect to the test data! It is not life that is unfair but people who are careless.

It should be no problem to acquire real world images with much higher spatial resolution.




@emartini, how do you define ‘false positives’: as two or more objects that are crossed/touching but treated as one? I think these could be eliminated (if not resolved) by specifying a low value for the d/L ratio (minFeret/Feret < 0.2 or so), so I’m not worried about them. It is my understanding, however, that longer fibers have a higher probability of crossing other fragments than shorter ones do, thus biasing the result towards the shorter objects. @Benoit’s script clearly shows such objects but does not resolve them, and my worry about the bias seems to be borne out.
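
The suggested d/L screen could look something like this sketch (Python for illustration, not ImageJ code; the object list, field names, and the 0.2 cutoff are example values, not measured data):

```python
# Hypothetical measurements for three detected objects.
objects = [
    {"feret": 50.0, "min_feret": 2.0},   # long thin rod -> keep
    {"feret": 40.0, "min_feret": 20.0},  # crossed/clumped blob -> drop
    {"feret": 30.0, "min_feret": 3.0},   # single rod -> keep
]

def is_single_rod(obj, max_ratio=0.2):
    # A lone rigid rod is long and thin; crossings inflate minFeret,
    # so a high minFeret/Feret ratio flags a probable compound object.
    return obj["min_feret"] / obj["feret"] < max_ratio

kept = [o for o in objects if is_single_rod(o)]
print(len(kept))  # -> 2
```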

I’m less concerned with the false negatives as one has to set a lower length limit anyway to filter out dust and dirt.

Re the reward: I thought you’re in research for sheer pleasure, not for a living! :wink:


@Herbie a.k.a. George, I’m scouring the web and this forum for the 200-line macro to give it a spin–and can’t find it! :wink:


I solved the problem by sweeping the FFT to segment the lines out via a macro.

This was the work flow:

  1. Create a bar mask to filter the FFT for line-like objects

  2. Create a 180 slice stack of the mask being rotated in 1 degree increments.

  3. Use the mask to crop a single band from the FFT of the image in 1 degree increments.

  4. Take the inverse FFT to get the image where any lines aligned orthogonal to the angle will be suppressed.

  5. Normalize the stack intensity, and then threshold the stack such that suppressed lines are removed.

  6. Extract the lines in 3D “angle space”: invert the stack (so lines are white), create a maximum intensity projection, use the image calculator to generate a difference stack between the inverted stack and the maximum intensity projection (leaving only the lines suppressed at each angle), and apply a 1x1 median filter to clean up artifacts.

  7. Now the lines are nicely segmented in 3D angle space. This means you can run your algorithm of choice to segment out the 3D objects (such as the classic watershed or the 3D Objects Counter).

  8. With the segmented stack, each line object will be one of the lines from the image. To analyze them individually, simply threshold out the line of interest, or take a MIP of the whole stack to get back a 2D image with the segmented lines.

The advantage to this method is that it is fairly fast, is specific for straight lines, and the resolution of the process can be adjusted to trade-off speed vs. accuracy.
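
To make step 6 concrete, here is a toy sketch of the image-calculator arithmetic (Python for illustration, not ImageJ code, with tiny invented arrays standing in for 8-bit slices): subtracting each inverted slice from the maximum-intensity projection leaves only the lines that were suppressed at that angle.

```python
# Three "angle slices" of a 1-pixel-tall, 2-pixel-wide image after inversion,
# where 255 = line present, 0 = line suppressed at that angle.
inverted_stack = [
    [255, 0],    # slice 1: line A visible, line B suppressed
    [255, 255],  # slice 2: both visible
    [0, 255],    # slice 3: line A suppressed, line B visible
]

# Maximum-intensity projection across the stack: every line is white.
mip = [max(col) for col in zip(*inverted_stack)]

# Image-calculator "difference": MIP minus slice cancels surviving lines
# and keeps only the lines the FFT filter suppressed at that angle.
diff_stack = [[m - v for m, v in zip(mip, s)] for s in inverted_stack]
print(diff_stack)
```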

Thanks for the game, it was pretty fun!


@Llamero, this is a pretty nice solution, especially since it is specific to rigid-rod assemblies!

A general question: can the entire sequence be automated with a macro, with no adjustable parameters whatsoever, using non-ideal images like those presented earlier in this thread? As a total image-processing novice (with no immediate plans or need to dig very deep into the field), I’m familiar with only a very limited set of functions and macros in Fiji and have no feel for the extent of fiddling necessary to deal with such images.


The analysis was done entirely using core Fiji functions (as long as you use the 3D Objects Counter for the segmentation rather than the classic watershed).

I just drew the mask by hand. It’s simply an 8-pixel-wide rectangle centered on the origin, with a 2x2 square at the origin to preserve the DC component (i.e. keep the image brightfield).

I then made a new image to store the rotations of the mask and ran the following code to create the rotation stack:

for(a=1; a<=180; a++){
	selectWindow("FFT Mask.tif");
	run("Duplicate...", "title=1");
	run("Rotate... ", "angle=" + a + " grid=1 interpolation=None");
	run("Select All");
	run("Copy");	// copy the rotated mask...
	close("1");
	selectWindow("FFT Mask stack");
	setSlice(a);	// ...and paste it into slice a of the 180-slice stack
	run("Paste");
	run("Select None");
}

I then made another stack to store the filtered images, and then ran the following code to crop the mask from the FFT of the original image in 1 degree increments and then store the inverse FFT:

setBatchMode(true);
for(a=1; a<=180; a++){
	selectWindow("FFT Mask stack.tif");
	setSlice(a);	// mask for the current angle
	run("Select None");
	run("Create Selection");
	selectWindow("Clipboard.tif");	// source image; recompute its FFT each pass
	run("FFT");
	selectWindow("FFT of Clipboard.tif");
	run("Restore Selection");
	run("Clear Outside");
	run("Select None");
	run("Inverse FFT");
	close("FFT of Clipboard.tif");
	selectWindow("Inverse FFT of Clipboard.tif");
	run("Select All");
	run("Copy");	// copy the filtered image...
	selectWindow("Filtered stack");
	setSlice(a);	// ...into slice a of the result stack
	run("Paste");
	run("Select None");
	close("Inverse FFT of Clipboard.tif");
}
setBatchMode("exit and display");

The FFT filtering made the slices vary somewhat in intensity, so I normalized the intensity of each slice using the median intensity:

median = newArray(180);
for(a=1; a<=180; a++){
	selectWindow("Filtered stack.tif");
	setSlice(a);
	List.setMeasurements();	// measure the current slice
	median[a-1] = List.getValue("Median");
}

// mean of the per-slice medians
Array.getStatistics(median, min, max, mean, std);

for(a=1; a<=180; a++){
	selectWindow("Filtered stack.tif");
	setSlice(a);
	run("Multiply...", "value=" + mean/median[a-1] + " slice");
}

The last steps described in the original method were done by hand, but they could easily be recorded and implemented as a macro.

The two tunable parameters are the width of the mask and the threshold. The thinner the mask, the more stringent the filter becomes towards thinner lines. It may be possible to use an autothreshold to set the threshold automatically, but I found it much easier to do it by eye, moving the threshold until suppressed lines at the corresponding angle were completely removed. The threshold holds true for all angles if the stack has been normalized (see above), so you simply set the threshold for the first slice and then apply it to the whole stack.

To measure length and angle, I would use the results table from the 3D Objects Counter to crop the stack to just the bounding box that contains the line of interest. Then I would use Process > Math > Macro to select the line I want based on intensity (i.e. its numerical identifier from the segmentation). I would then create a maximum intensity projection and use the particle analyzer to get whatever parameters I was interested in about the line.
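
As a rough sketch of that measurement step (Python for illustration, not ImageJ code; the voxel list is invented), the length of one segmented object can be approximated from its 2D projection as the maximum pairwise pixel distance, i.e. the Feret diameter that the particle analyzer reports:

```python
import math

# Hypothetical (x, y, angle) voxels of one segmented line object.
voxels = [(0, 0, 45), (1, 1, 45), (2, 2, 45), (3, 3, 46), (4, 4, 46)]

# Maximum-intensity projection back to 2D: drop the angle coordinate.
xy = {(x, y) for x, y, a in voxels}

# Feret diameter = largest pairwise distance between pixels (brute force;
# fine for small objects).
length = max(math.dist(p, q) for p in xy for q in xy)
print(round(length, 3))  # distance from (0,0) to (4,4)
```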

Hope this helps,
Ben Smith


One side note, judging by the similarity in appearance between my solution and the CellProfiler output, I’m wondering if they use a similar FFT filtering approach.


@Llamero, great thanks for a detailed explanation and sequence. I have to admit, though, that I feel a bit overwhelmed. :blush:


It may be easier to just broadly state the problem and solution. The problem is that the lines overlap in 2D space. This means that a given pixel in an image is not necessarily unique to a single line.

The solution is to add a third dimension, where every voxel in the 3D space will, by definition, become unique to a single line.

In the solution I proposed, the third dimension I chose to add was directionality (the angle at which the line points). This is because if two lines occupy the same XY space and point in the same direction, then you actually have just one line by the definition of the problem (i.e. two perfectly superimposed lines are indistinguishable from a single line).

Therefore, if two lines intersect, the property that makes each line unique is their direction. The FFT filter is simply a way to quickly filter all lines based on their orientation, independent of their position in the image (since there is no positional information in an amplitude FFT).

Therefore, we now have replaced our (X,Y) pixels with (X,Y, angle) voxels, where we know that by definition, all lines will occupy a unique position within this 3D space:

Click here for a movie of the 3D space.
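
A minimal sketch of the idea (Python for illustration, not ImageJ code, with two idealized lines): pixels shared in 2D become disjoint voxels once each line is tagged with its angle.

```python
# Two lines through the origin: one at 0 degrees, one at 90 degrees.
line_h = {(x, 0) for x in range(-3, 4)}   # horizontal
line_v = {(0, y) for y in range(-3, 4)}   # vertical

# In 2D they overlap at the crossing point:
print(line_h & line_v)  # -> {(0, 0)}

# Tag each pixel with the line's angle -> voxels in (x, y, angle) space:
vox_h = {(x, y, 0) for x, y in line_h}
vox_v = {(x, y, 90) for x, y in line_v}

# In angle space the two lines no longer share any voxel:
print(vox_h & vox_v)  # -> set()
```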

This is how most separation methods work: by adding another dimension you get better separation (such as going from an SDS-PAGE gel to a 2D gel, or from a maximum intensity projection to a 3D confocal image). The key is to choose an extra dimension that is highly informative and will give the most effective separation.

You can also see that one artifact is that lines that hit the edge of the angle space loop back to the other side of the stack. This is because 180 degrees = 0 degrees, and as such polar coordinates do not project well into Cartesian space. The solution to this is to know that lines can wrap from one end of the angle space to the other, creating an effect that behaves similarly to a periodic boundary condition.
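
The wrap-around can be handled with an orientation distance that treats 180° as 0° (a Python sketch of the periodic boundary; the function name is mine):

```python
def angle_dist(a, b, period=180):
    # Orientation difference on a circle where `period` wraps back to 0.
    d = abs(a - b) % period
    return min(d, period - d)

# A line detected at 179 degrees is only 2 degrees away from one at
# 1 degree, so the two ends of the angle stack must be treated as adjacent.
print(angle_dist(179, 1))  # -> 2
print(angle_dist(10, 80))  # -> 70
```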


That part and your reasoning were perfectly clear, and the movie just polished it off; it’s the exact multistep sequence of operations in Fiji/ImageJ that is the cause of some heartburn. It’s not too surprising as one has to live and breathe such problems everyday to acquire a certain comfort. Thanks again!