[Scripting-Groovy] parsing text file into ImagePlus

groovy
textfiles

#1

Hello

I need to parse a set of text files into ImageJ. Here is my Groovy script:

// @File infile
// @int nrows
// @int ncols
// @int nslices

import ij.*
import ij.process.*

def ImagePlus imp = readfile() 
imp.show

def readfile() {
    def stack = new ImageStack(nrows, ncols)	
    def lines = infile.readLines()
    def pixels = []
    lines.each { line ->
        line.tokenize() { items ->
            items.each { item ->
                pixels.add(item as int)
                if (pixels.length==nrows*ncols) {
                    stack.addSlice('', pixels as int[])
                    pixels = []
                }
            }
        }
    }
    return new ImagePlus(infile.filename, stack)
}

My console complains about

groovy.lang.MissingMethodException: No signature of method: java.lang.String.tokenize() is applicable for argument types: (Script19$_readfile_closure1_closure2) values: [Script19$_readfile_closure1_closure2@2a1f74d3]
Possible solutions: tokenize(), tokenize(), tokenize(java.lang.String), tokenize(java.lang.Character), tokenize(java.lang.CharSequence), tokenize(java.lang.Character)

What am I doing wrong?


#2

It seems tokenize() has to be called on its own rather than being passed a closure. Here is the final working version:

// @File infile
// @int nrows
// @int ncols

import ij.*
import ij.process.*

def ImagePlus imp = readfile() 
imp.show()

def readfile() {
    def stack = new ImageStack(nrows, ncols)
    def lines = infile.readLines()
    def pixels = []
    def size = nrows*ncols
    lines.each { line ->
        def items = line.tokenize()        // split on whitespace, skipping repeated separators
        items.each { item ->
            pixels.add(item as short)
            if (pixels.size() == size) {   // one full slice collected
                stack.addSlice('', pixels as short[])
                pixels = []
            }
        }
    }
    return new ImagePlus(infile.getName(), stack)
}

#3

Thanks for sharing, @BishopWolf!

Yes, the tokenize() function returns a List and doesn’t take a closure argument itself. You can still chain all calls in a single line:

lines.each { line ->
    line.tokenize().each { item ->
        // your closure
    }
}
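To make the return type concrete, here is a minimal standalone sketch (the sample string is made up for illustration) showing that tokenize() simply hands back a List of String tokens, which you can then iterate or convert as in the script above:

```groovy
// tokenize() with no arguments splits on whitespace and returns a List<String>
def tokens = '12 34 56'.tokenize()
assert tokens == ['12', '34', '56']

// each token can then be converted to a numeric type, as the script does
def values = tokens.collect { it as int }
assert values == [12, 34, 56]
```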

I also found this comparison of split() vs. tokenize() helpful.


#4

The problem with split() is that consecutive separators, for instance double spaces, produce empty elements in the result, whereas tokenize() skips them and returns only the actual tokens. Thanks for the single-line call.
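A quick sketch of that difference (the sample line is invented, with a deliberate double space):

```groovy
def line = '1  2 3'   // note the double space between 1 and 2

// split() keeps an empty element between consecutive separators
assert line.split(' ') as List == ['1', '', '2', '3']

// tokenize() collapses consecutive separators and returns only real tokens
assert line.tokenize(' ') == ['1', '2', '3']

// with no argument, tokenize() uses whitespace as the delimiter set
assert line.tokenize() == ['1', '2', '3']
```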