Polygon/Mesh boundary

Tags: discrete, roi

#1

There are different possibilities for how to define the boundary of a polygon/mesh when it is extracted from a ROI.

In the following pictures we have a BitType mask with white foreground and black background pixels. The orange squares are the discrete pixel coordinates, and the blue line is the boundary of the polygon/mesh.

  Option 1:

    The boundary cuts through the pixel coordinates of the white pixels (discrete polygon/mesh coordinates).

  Option 2:

    The boundary runs between the white and black pixels (real-valued polygon/mesh coordinates).

  Option 3:

    The boundary cuts through the pixel coordinates of the black pixels (discrete polygon/mesh coordinates).

Right now the polygon and mesh extractors in ops implement option 1.

I think that option 2 is more accurate if we consider that the pixel representation is a discretization of the real world (see the sketch after the poll below).

  • Option 1
  • Option 2
  • Option 3

0 voters
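
To make the difference concrete, here is a minimal sketch (plain Java, purely illustrative and not the ops implementation) for a 2×2 block of white pixels: option 1 places the vertices on the integer coordinates of the white pixels, while option 2 shifts the boundary outward by half a pixel so that it runs between the white and black samples.

```java
public class BoundaryOptions {

	/** Shoelace formula: absolute area of a simple closed polygon. */
	static double area(final double[][] v) {
		double sum = 0;
		for (int i = 0; i < v.length; i++) {
			final double[] a = v[i];
			final double[] b = v[(i + 1) % v.length];
			sum += a[0] * b[1] - b[0] * a[1];
		}
		return Math.abs(sum) / 2;
	}

	public static void main(final String[] args) {
		// White pixels of the mask sit at integer coordinates (0,0), (1,0), (0,1), (1,1).
		final double[][] option1 = { { 0, 0 }, { 1, 0 }, { 1, 1 }, { 0, 1 } };
		// Option 2: the same boundary shifted outward by half a pixel on every side.
		final double[][] option2 =
			{ { -0.5, -0.5 }, { 1.5, -0.5 }, { 1.5, 1.5 }, { -0.5, 1.5 } };

		System.out.println("option 1 area = " + area(option1)); // 1.0
		System.out.println("option 2 area = " + area(option2)); // 4.0 = number of white pixels
	}
}
```

Only under option 2 does the enclosed area equal the number of foreground pixels; the option-1 polygon of an n×n block encloses only (n-1)², which also matters for the area measurements discussed further down in this thread.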


#2

> I think that option 2 is more accurate if we consider that the pixel representation is a discretization of the real world.

But is it? Or is it a set of samples of the real world? If the latter is true, then option 2 introduces artifacts, because it interpolates the space between the samples as a rectangular grid.


#3

@awalter17 and I discussed iterating ROI boundaries recently, and this is really the same problem.

One thing I’d like to see is that if you go from mask to polygon, and then back again, you end up with the same mask. There can be weirdness with samples that lie directly on the boundary of a polygon: I believe that in the current code, at least for rectangles, @awalter17 said that the bottom-left sample gets included but the others don’t. So with both option 1 and option 3 above, my criterion of producing the same mask after transforming back will not hold. Only option 2 makes that work as expected.

That said, I’m not necessarily endorsing option 2. I think we could alternatively fix the situation by changing how border inclusion/exclusion is handled for ROIs, etc.
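
To illustrate that criterion, here is a rough property-check sketch. The two converters are hypothetical placeholders (not the ops or imglib2-roi API); the point is only the shape of the round-trip test, independent of which boundary option the converters implement.

```java
import java.util.Arrays;

public class RoundTripCheck {

	// Hypothetical placeholder: extract a boundary polygon (vertex list) from a mask.
	static double[][] maskToPolygon(final boolean[][] mask) {
		throw new UnsupportedOperationException("placeholder, not the ops implementation");
	}

	// Hypothetical placeholder: rasterize the polygon back into a mask of the same size.
	static boolean[][] polygonToMask(final double[][] polygon, final int width, final int height) {
		throw new UnsupportedOperationException("placeholder, not the ops implementation");
	}

	/** The property from the post: mask -> polygon -> mask must reproduce the input. */
	static boolean roundTripHolds(final boolean[][] mask) {
		final double[][] polygon = maskToPolygon(mask);
		final boolean[][] back = polygonToMask(polygon, mask[0].length, mask.length);
		return Arrays.deepEquals(mask, back);
	}
}
```

With option 1 or option 3 the rasterizer has to tie-break samples that lie exactly on the boundary, which is where such a check can fail; with option 2 no integer sample lies on the boundary at all.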


#4

I was also going to suggest option 2 for @awalter17’s ROI discussion. I remembered that there was some discussion along these lines recently, and I finally found it here: https://github.com/imagej/imagej-ops/issues/439#issuecomment-242709471

I agree with @ctrueden that “if you go from mask to polygon, and then back again, you end up with the same mask” is a desirable property.


#5

> I agree with @ctrueden that “if you go from mask to polygon, and then back again, you end up with the same mask” is a desirable property.

I do agree with this as well, but is that mutually exclusive with the other options (specifically option 1)? Isn’t recovering the mask basically the same between option 1 and option 2, except that in option 1 you also fill the pixels along the boundary? On the other hand, option 2 involves messing with real coordinates for what could otherwise be a completely discrete coordinate system.
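
One way to see where the two differ is to run a standard even-odd (ray-casting) point-in-polygon test over the integer sample coordinates. The sketch below is illustrative only (it is not the ops code) and reuses the 2×2 example from the first post: with an option-1 boundary some white samples lie exactly on an edge, so whether they are recovered depends on tie-breaking (here only one corner survives, much like the rectangle behaviour described above), whereas with an option-2 boundary no sample ever touches an edge.

```java
public class SampleInclusion {

	/** Standard even-odd ray-casting test; samples exactly on an edge are tie-broken arbitrarily. */
	static boolean contains(final double[][] poly, final double x, final double y) {
		boolean inside = false;
		for (int i = 0, j = poly.length - 1; i < poly.length; j = i++) {
			final double xi = poly[i][0], yi = poly[i][1];
			final double xj = poly[j][0], yj = poly[j][1];
			final boolean crosses = (yi > y) != (yj > y)
				&& x < (xj - xi) * (y - yi) / (yj - yi) + xi;
			if (crosses) inside = !inside;
		}
		return inside;
	}

	public static void main(final String[] args) {
		// Same 2x2 block of white pixels as in the earlier sketch.
		final double[][] option1 = { { 0, 0 }, { 1, 0 }, { 1, 1 }, { 0, 1 } };
		final double[][] option2 =
			{ { -0.5, -0.5 }, { 1.5, -0.5 }, { 1.5, 1.5 }, { -0.5, 1.5 } };

		// Option 1: the white samples lie on the boundary, so inclusion is a tie-break.
		System.out.println(contains(option1, 0, 0)); // true
		System.out.println(contains(option1, 1, 1)); // false, although that pixel was white
		// Option 2: every white sample is strictly inside, no ambiguity.
		System.out.println(contains(option2, 1, 1)); // true
	}
}
```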


#6

All of those are just 4-connected boundaries… If you look in the literature you will find that 8-connected boundaries are often closer to reality, and the overestimation of boundary length is larger with 4-connected ones.
There is a large body of work in this area (going back to the 1960s), including the errors associated with the various types of boundary encoding (and there are many more encodings). It is best to read a bit rather than reinventing the wheel.
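
To give a feel for the size of that overestimation, here is a toy calculation (illustrative numbers only, not taken from the papers): the length of a digitized diagonal segment measured as a 4-connected staircase versus as an 8-connected Freeman chain code in which diagonal steps count √2.

```java
public class ChainCodeLength {

	public static void main(final String[] args) {
		// A digitized diagonal segment through n pixels: (0,0), (1,1), ..., (n-1,n-1).
		final int n = 100;
		final double euclidean = (n - 1) * Math.sqrt(2);       // true segment length
		final double fourConnected = 2.0 * (n - 1);            // staircase: one right + one up per step
		final double eightConnected = (n - 1) * Math.sqrt(2);  // one diagonal move per step, weight sqrt(2)

		System.out.printf("true length : %.1f%n", euclidean);
		System.out.printf("4-connected : %.1f (~%.0f%% too long)%n",
			fourConnected, 100 * (fourConnected / euclidean - 1));
		System.out.printf("8-connected : %.1f%n", eightConnected);
	}
}
```

Even the 8-connected estimate is only exact for this ideal diagonal; the papers listed further down analyse the errors of the various encodings for general curves.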


#7

So… these papers?

  • On the encoding of arbitrary geometric configurations
  • On the quantization of line-drawing data

Any others you’d specifically recommend?


#8

Yes, those are good, and there are many more dealing with these issues, e.g.:

Rosen, D. A Note on the Measurements of Quantized Areas and Boundaries. University of Maryland, Computer Science Center, TR-713 (November 1978).

Rosen, D. On the areas and boundaries of quantized objects. Computer Graphics and Image Processing, 13(1) (May 1980), pp. 94–98.

Kulpa, Z. Area and perimeter measurement of blobs in discrete binary pictures. Computer Graphics and Image Processing, 6 (1977), pp. 434–451.

Sankar, P.V., Krishnamurthy, E.V. On the compactness of subsets of digital pictures. Computer Graphics and Image Processing, 8 (1978), pp. 136–143.

Sankar, P.V. Grid intersect quantization schemes for solid object digitization. Computer Graphics and Image Processing, 8 (1978), pp. 25–42.

Rosenfeld, A. Compact figures in digital pictures. IEEE Trans. Systems, Man, and Cybernetics, SMC-4 (1974), pp. 221–223.

Ellis, T.J., Proffitt, D., Rosen, D., Rutkowski, W. Measurement of the lengths of digitized curved lines. Computer Graphics and Image Processing, 10 (1979), pp. 333–347.

plus the papers dealing with Bresenham’s circle and line algorithms.

My take is that it is best to use 8-neighbour connectivity for the foreground (it gives the smallest error without having to resort to length and area correction factors that might not always apply), and to centre the ROIs on the pixels rather than on the top-left corner as IJ traditionally does, since that tends to confuse what one is measuring. (No, I am not suggesting changing how IJ does it now, but it is useful to think of it that way; Particles8 and Lines8 are based on this to compute areas and lengths.)
My other suggestion is, when dealing with areas, to compute both the enclosed polygon area (via Green’s theorem / Pick’s theorem) and the number of pixels in the object (which is what IJ does, but this overestimates the area). The two will not be the same, and they provide different information.
This was briefly discussed in the github link above.
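
As a concrete (hypothetical) illustration of the two area measures: for a w×h block of foreground pixels whose option-1 boundary polygon has its vertices on the pixel centres, the enclosed polygon area (shoelace formula, i.e. Green’s theorem) and the raw pixel count differ exactly as Pick’s theorem predicts, by half the number of boundary pixels plus one.

```java
public class AreaMeasures {

	/** Shoelace formula (Green's theorem) for the enclosed area of a closed polygon. */
	static double shoelace(final double[][] v) {
		double sum = 0;
		for (int i = 0; i < v.length; i++) {
			final double[] a = v[i];
			final double[] b = v[(i + 1) % v.length];
			sum += a[0] * b[1] - b[0] * a[1];
		}
		return Math.abs(sum) / 2;
	}

	public static void main(final String[] args) {
		// A w x h block of foreground pixels with an option-1 boundary on the pixel centres.
		final int w = 10, h = 5;
		final double[][] boundary = { { 0, 0 }, { w - 1, 0 }, { w - 1, h - 1 }, { 0, h - 1 } };

		final double polygonArea = shoelace(boundary); // (w-1)*(h-1) = 36
		final int pixelCount = w * h;                  // 50

		// Pick's theorem: pixelCount = polygonArea + boundaryPixels/2 + 1 (here 36 + 13 + 1).
		System.out.println("polygon area = " + polygonArea);
		System.out.println("pixel count  = " + pixelCount);
	}
}
```

Whether either number matches the “true” area of the underlying object is a separate question (that is what the correction-factor literature above is about); the sketch only shows that the two measures genuinely differ and so carry different information.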


#9

@gabriel that’s my fault. The extract contour implementation in ops is 8-connected.

Here is the extracted polygon I get with ops:

As far as I understand it, the boundary location problem is still the same.