Need help understanding co-occurrence numbers

glcm
co-occurence
imagej-ops

#1

Thanks to the help I got last time around, I now have my first attempt at glcm.

// Wrap the stack as an ImgLib2 image and compute the GLCM with imagej-ops
ImagePlus imp = new ImagePlus("radio", stk1);
Img<T> img = ImagePlusAdapter.wrap(imp);
IterableInterval<T> it1 = (IterableInterval<T>) img;
if (suvp.ij == null) suvp.ij = new net.imagej.ImageJ();
glcm = suvp.ij.op().image().cooccurrenceMatrix(it1, COOCCURRENCE_HI + 1, step, MatrixOrientation2D.HORIZONTAL);

Step is set to 1 (nearest neighbor). I put a 5×5 sample in the brain, with the values normalized to max=10.
I got the following

pixels(5,5)
1 2 3 3 3
2 3 5 4 4
4 5 6 6 6
6 6 7 8 9
7 7 8 10 10
glcm step=1
  0    1    2    3    4    5    6    7    8    9   10
0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00  0
0.00 0.00 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00  1
0.00 0.00 0.10 0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00  2
0.00 0.00 0.00 0.05 0.05 0.00 0.00 0.00 0.00 0.00 0.00  3
0.00 0.00 0.00 0.05 0.00 0.00 0.05 0.00 0.00 0.00 0.00  4
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00  5
0.00 0.00 0.00 0.00 0.00 0.00 0.15 0.05 0.00 0.00 0.00  6
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 0.10 0.00 0.00  7
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 0.05  8
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00  9
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05  10

Many of the values make sense. There are 20 horizontal pairs (5 rows × 4 neighbouring pairs per row), so 0.05 = 1 occurrence, 0.10 = 2 and 0.15 = 3.
10,10 occurs once and that is what I see.
What I can’t understand is 0,1 which is supposed to occur once.
Since there is no zero in this particular sample, I would expect 0,x to be zero, for all x.

By chance I happened to start with step=2 and got

glcm step=2
  0    1    2     3     4     5    6     7     8     9    10
0.00 0.00 0.067 0.00  0.00  0.00 0.00  0.00  0.00  0.00  0.00  0
0.00 0.00 0.067 0.00  0.067 0.00 0.00  0.00  0.00  0.00  0.00  1
0.00 0.00 0.067 0.067 0.00  0.00 0.00  0.00  0.00  0.00  0.00  2
0.00 0.00 0.00  0.00  0.00  0.00 0.067 0.00  0.00  0.00  0.00  3
0.00 0.00 0.00  0.067 0.00  0.00 0.067 0.00  0.00  0.00  0.00  4
0.00 0.00 0.00  0.00  0.00  0.00 0.00  0.00  0.00  0.00  0.00  5
0.00 0.00 0.00  0.00  0.00  0.00 0.067 0.067 0.067 0.00  0.00  6
0.00 0.00 0.00  0.00  0.00  0.00 0.00  0.00  0.067 0.067 0.067 7
0.00 0.00 0.00  0.00  0.00  0.00 0.00  0.00  0.00  0.00  0.067 8
0.00 0.00 0.00  0.00  0.00  0.00 0.00  0.00  0.00  0.00  0.00  9
0.00 0.00 0.00  0.00  0.00  0.00 0.00  0.00  0.00  0.00  0.00  10

Here I noticed that 0,2 occurred once, which didn’t make sense, so I switched the step to one.
Can someone please enlighten me as what I am missing?

Thanks,
Ilan


#2

There is an additional problem which I don’t understand. For row 10,x the above numbers make sense for the +x direction. I understood that HORIZONTAL includes both +x and -x. I looked at the constants for MatrixOrientation2D and they are HORIZONTAL, VERTICAL, DIAGONAL, and ANTIDIAGONAL. This is exactly what I would expect if both positive and negative directions are included.
On the above data, in addition to 10,10, I would expect another contribution at 10,8 for -x. For step 2, I would expect 10,x = 0 for all +x but would expect to find contributions for 10,8 and 10,7 for -x.

In short, I clearly need help in understanding the numbers which appear.
Thanks again,
Ilan

P.S. Does anyone know where the source code for suvp.ij.op().image().cooccurrenceMatrix is located? It would be nice to get control of it inside a debugger.


#3

Thanks for the detailed post, I agree with you that the result doesn’t make sense (it looks fine though for distances of 3 and 4). I’d assume that this is a bug in the implementation of the cooccurrenceMatrix op.

You can find the source in net.imagej.ops.image.cooccurrenceMatrix.CooccurrenceMatrix2D.

@dietzc might be able to help with more details.

I couldn’t find a unit test in ops that tests the results; it might be a good idea to add one, to make sure the implementation is in agreement with others, such as the Texture Analyzer plugin.


#4

I think the issue is in these lines, where:

sx < pixels[sy].length

should be changed to:

sx < pixels[y].length

Do you agree? I’ll try to test it and submit a pull request when I find the time.


#5

Thanks for the reply. I didn’t know what to think.

My present guess is that somehow a zero is getting into the matrix and a co-occurrence is counted on that zero.
I was just looking at

while (cursor.hasNext()) {
	cursor.fwd();
	final int bin = (int) (((cursor.get().getRealDouble() - localMin) / diff) *
		(nrGreyLevels));
	pixels[cursor.getIntPosition(1) - minimumY][cursor.getIntPosition(0) -
		minimumX] = bin < nrGreyLevels - 1 ? bin : nrGreyLevels - 1;
}

and thought “what if there is a problem with the first pixel or the last?” Maybe the cursor.fwd() is jumping too soon? Maybe the last pixel is forgotten? Without a debugger, it is all guesswork. My only clue seems to be that there appears to be exactly 1 zero in the matrix. (The nice thing about expecting zero events is that 1 event jumps out at you.)


#6

The other part of the problem is probably more serious. I see the -x direction is missing. I would add something like

// additional pass for the -x (negative) direction
for (int y = pixels.length - 1; y >= 0; y--) {
	for (int x = pixels[y].length - 1; x >= 0; x--) {
		// ignore pixels not in mask
		if (pixels[y][x] == Integer.MAX_VALUE) {
			continue;
		}

		// get second pixel, offset in the negative direction
		final int sx = x - orientationAtX;
		final int sy = y - orientationAtY;

		// second pixel in interval and mask
		if (sx >= 0 && sy >= 0 && sy < pixels.length
				&& sx < pixels[sy].length
				&& pixels[sy][sx] != Integer.MAX_VALUE) {
			output[pixels[y][x]][pixels[sy][sx]]++;
			nrPairs++;
		}
	}
}

Finally I have an additional request: is it possible to expose nrPairs to the outside world? Some utility function for it?

I would like the possibility to “undo” the renormalization which is automatically done. The nrPairs would do this for me.
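
To make the request concrete, here is a minimal sketch of what I mean (my own illustration; the accessor is hypothetical, the op does not currently expose the pair count): multiplying each normalized cell by nrPairs and rounding would recover the raw co-occurrence counts.

// Sketch only: undo the normalization, assuming nrPairs were made available
// (e.g. through a hypothetical getNrPairs() method on the op).
static int[][] toRawCounts(double[][] glcm, int nrPairs) {
	int[][] counts = new int[glcm.length][];
	for (int i = 0; i < glcm.length; i++) {
		counts[i] = new int[glcm[i].length];
		for (int j = 0; j < glcm[i].length; j++) {
			counts[i][j] = (int) Math.round(glcm[i][j] * nrPairs);
		}
	}
	return counts;
}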

If you do make a change, please, please, please give me a crack at the jar file or whatever, so I can test it. I want my sanity check on the very simple numbers I got from the sample brain study.

Thanks,
Ilan


#7

Regarding the unit tests: https://github.com/imagej/imagej-ops/blob/master/src/test/java/net/imagej/ops/features/haralick/HaralickFeatureTest.java.

We’ve spent a lot of time validating the implementation against other tools, e.g. MATLAB or CellProfiler. However, there will always be some slight differences, depending on how someone treats edge cases etc.


#8

Do you agree with my statement:

or am I missing something?


#9

Hi @ilan,

I have to revert my statement above; the result is correct. I think the problem with your example is the binning: you are using 11 grey levels (0 to 10) whereas your input only contains 10 (1 to 10). This way, each value will be re-binned, and your minimum intensity (1) will end up in bin 0.
(If you look at your results, you’ll see that there are no values in bin 5, as value 5 => bin 4 and value 6 => bin 6.)
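
To illustrate, here is a rough sketch of that binning (my own example; I am assuming diff is simply max - min, as in the snippet quoted above from CooccurrenceMatrix2D):

// Re-binning of the 5x5 sample values, assuming diff = max - min.
// With min = 1, max = 10 and nrGreyLevels = 11, value 1 lands in bin 0,
// value 5 in bin 4, value 6 in bin 6, and value 10 is clamped to bin 10.
int nrGreyLevels = 11;
double min = 1, max = 10, diff = max - min;
for (int value = 1; value <= 10; value++) {
	int bin = (int) (((value - min) / diff) * nrGreyLevels);
	bin = bin < nrGreyLevels - 1 ? bin : nrGreyLevels - 1;
	System.out.println("value " + value + " -> bin " + bin);
}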

When testing this image:

and the following Groovy script:

#@ Img img
#@ Integer(value=2) distance 
#@ OpService ops
import net.imagej.ops.image.cooccurrenceMatrix.MatrixOrientation2D
result = ops.run("image.cooccurrenceMatrix", img, 10, distance, MatrixOrientation2D.HORIZONTAL)

I do get all the expected results:

distance = 1

distance = 2

distance = 3


#10

Hi Jan,
In almost all cases there will be tons of zeros. By chance I picked the brain, which is hot.
Certainly not all values need to appear in the matrix, and in this case zero was missing.

I renormalized the data so that the maximum value was reached, and I wanted the range to start from zero.
Ilan


#11

So, what you’re missing is a way to tell the op which range of grey levels should be used (instead of normalizing to the actual image min and max), did I get that right?


#12

Yes, because it could be missing 0, 1 and 2. In 99% of the cases there will be tons of zeros.
What bothers me even more is the missing -x direction. From what I read HORIZONTAL is supposed to cover both +x and -x.


#13

You might want to create a new issue for this, so that it doesn’t get forgotten.

In order to find pairs, you only need to go one direction. The result is just a 2D histogram of co-occurrences, no need to count pairs twice.


#14

What you are describing is an eastern direction.

https://prism.ucalgary.ca/bitstream/1880/51900/1/texture%20tutorial%20v%203_0%20170329.pdf

On page 17 there is:

3. We want the GLCM to be symmetrical around the diagonal:

A symmetrical matrix means that the same values occur in cells on opposite sides of the diagonal. For example, the value in cell 3,2 would be the same as the value in cell 2,3. The matrix we calculated above is not symmetrical. However, texture calculations are best performed on a symmetrical matrix.

The matrix above counted each reference pixel with the neighbour to its right (east). If counting is done this way, using one direction only, then the number of times the combination 2,3 occurs is not the same as the number of times the combination 3,2 occurs (for example 3 may be to the right of 2 three times, but to the left of 2 only once). However, symmetry will be achieved if each pixel pair is counted twice: once “forwards” and once “backwards” (interchanging reference and neighbour pixels for the second count).

Example: A reference pixel of 3 and its neighbour of 2 would contribute one count to the matrix element 3,2 and one count to the matrix element 2,3.

Symmetry also means that when considering an eastern (1,0) relation, a western (-1,0) relation is also counted. This could now be called a “horizontal” matrix.

Making a matrix symmetrical in this way also neatly gets us around the problem of the window edge pixels: remember the ones on the left could never be a neighbour pixel in an east relationship, and the ones on the right could never be a reference pixel in an east relationship. But in a horizontal relationship, making the symmetrical matrix, each pixel gets to be a reference and neighbour pixel, no matter where it is in the window.

How to make the matrix symmetrical - an easier way than double counting: Any matrix can be made symmetrical by adding it to its transpose. The transpose is created by interchanging the rows and columns of the original matrix.

I had thought that since the word “horizontal” was used, it included both east and west. This is apparently not the case. So long as “horizontal” is accepted as “east”, I can live with that. This subject is new to me and I need to know what the accepted rules are.
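
If I ever do need the symmetric variant, a sketch of the transpose trick from the tutorial (my own illustration, not taken from imagej-ops) would be:

// Make a GLCM symmetric by adding it to its transpose, as the tutorial describes.
// Plain double[][] illustration only; divide by 2 afterwards to keep it normalized.
static double[][] symmetrize(double[][] glcm) {
	int n = glcm.length;
	double[][] sym = new double[n][n];
	for (int i = 0; i < n; i++) {
		for (int j = 0; j < n; j++) {
			sym[i][j] = glcm[i][j] + glcm[j][i];
		}
	}
	return sym;
}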


#15

Hi Jan,
I would just like to thank you for your help. For me, the code was a “black box” where I had no idea what was really going on inside. I saw other code segments, but what was actually being used?
I have a region of interest over my data and want everything outside of it to be NaN. I knew MATLAB could handle this, but I had no idea about the current code. I also didn’t know that the code would do the binning for me.

Now that I see the code, life is simpler. I simply initialize the pixels with Integer.MAX_VALUE, then overwrite the pixels inside the ROI with image data. No need to scale to the maximum bin value. Just let it fly.
HORIZONTAL doesn’t imply a symmetric matrix, but who cares? I also don’t need any corrections for “missing” low values. That is all taken care of by the program itself.
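
In code, the masking idea would look roughly like this (my own sketch on a plain int[][] array, not the op’s actual internals; roiData, roiX and roiY are hypothetical names for the ROI values and their offset):

// Mark everything outside the ROI with Integer.MAX_VALUE, which the quoted
// source treats as "not in mask", then copy the ROI data over it.
int width = 5, height = 5;
int[][] pixels = new int[height][width];
for (int[] row : pixels) {
	java.util.Arrays.fill(row, Integer.MAX_VALUE);
}
int[][] roiData = { { 1, 2 }, { 3, 4 } }; // hypothetical ROI values
int roiX = 1, roiY = 1;                   // hypothetical ROI offset in the window
for (int y = 0; y < roiData.length; y++) {
	for (int x = 0; x < roiData[y].length; x++) {
		pixels[roiY + y][roiX + x] = roiData[y][x];
	}
}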

Without your taking the time to straighten out my newbie questions, I would have had next to an impossible time.
Thanks again,
Ilan

P.S. I see it isn’t quite as easy as it looked. In MATLAB one has direct access to the integer pixel values themselves, so the NaN approach works. Here I may have to play games with the IterableInterval. This part may be complicated. I’ll have to see what I can do.