RGB conversion, large files, and memory problems



Howdy Folks!

I have been working on this for quite some time and could use some advice. I am importing large whole slide images into ImageJ via Bio-Formats (from .ndpi, which I believe uses some sort of JPEG compression). The import goes well and produces an 8-bit hyperstack with three channels. When I use the Channels Tool (More > Convert to RGB) I get an odd image mashup similar to this unanswered topic: “Bug in “Merge Channels” with large files”. It seems to happen as the file size approaches 2 GB; the same happens when I import and then export to .tiff. My goal is to use color deconvolution and/or color thresholding to determine the area covered by a stain. I have many images and hope to be able to batch process them.

A further complication is that these processes seem to use memory and then fail to release it when the images are closed. Only exiting ImageJ releases the memory; I have tried clicking on the status bar and using the “Collect Garbage” plugin. I have 32 GB of memory, so only a few images are processed before the memory maxes out, at which point I would guess it starts trying to use virtual memory, and the computer locks up. Any advice would be great. Is there a better way to import or save/store the data for this application?

Should I have posted this in image analysis?

Thank you kindly for your time,


QuPath was designed for this kind of whole slide data: http://qupath.github.io

Using QuPath (+ either OpenSlide or Bio-Formats) you may be able to analyze the image directly with the built-in tools. If not, you can draw an annotation around all or part of the image and send it to ImageJ at whatever resolution you need - https://github.com/qupath/qupath/wiki/Working-with-ImageJ

The image in that case should be (packed) RGB from the start, with no separate conversion from a hyperstack necessary. Then, if you like, you can save it as an ImageJ TIFF and reopen it in your own Fiji installation.

If you do need this process regularly it could be scripted in QuPath.



Thanks for the pointers. I checked out QuPath. It has a slick interface for looking at large files, but there are file size restrictions on the export formats, including export to ImageJ.


Since an 8-bit stack seems to display properly, perhaps there is an issue with the conversion to an RGB stack or a ‘flattened’ RGB image for large files?


Oops, I forgot about the export limit, sorry. It’s a pretty arbitrary one that seemed a good idea at the time, and was introduced to guard against accidentally exporting infeasibly large images and encountering slow performance or memory errors. My intention in QuPath was that images should always be processed in smaller chunks - by resizing, cropping or both*.

However, here is a script to run through QuPath that extracts the image directly, circumventing the limit.
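Something along these lines should do it (a minimal sketch, assuming the QuPath scripting environment provides getCurrentServer() and RegionRequest; exact method names may differ between versions, and the output path is just a placeholder):

import qupath.lib.regions.RegionRequest
import javax.imageio.ImageIO

// Read the full image at a chosen downsample and write it out directly,
// bypassing the size check in the export dialog.
def server = getCurrentServer()
double downsample = 8.0   // increase this if the output is still unmanageably large

def request = RegionRequest.createInstance(server.getPath(), downsample,
        0, 0, server.getWidth(), server.getHeight())
def img = server.readBufferedImage(request)   // older API; newer QuPath versions use readRegion()

// Placeholder output path - change it to wherever you want the file written
// (writing TIFF via ImageIO requires a TIFF writer, e.g. Java 9+; PNG also works)
ImageIO.write(img, "tif", new File('/path/to/exported_image.tif'))
print 'Exported ' + img.getWidth() + ' x ' + img.getHeight() + ' image'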

However, this reminds me that the actual maximum size of an RGB image will be limited by the length of a Java array - at least if you are restricted to ImageJ1 structures. So if your intention is to have a full-resolution, large .ndpi file as a single ImageJ image then that could be very tricky. There is more info at https://imagej.net/Frequently_Asked_Questions#What_is_the_largest_size_image_that_ImageJ_can_open.3F
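To put a number on it: a packed RGB image in ImageJ1 keeps all of its pixels in a single Java int array, so the pixel count cannot exceed Integer.MAX_VALUE - roughly 2.1 gigapixels, or about a 46,000 x 46,000 square - which a full-resolution .ndpi can easily exceed. A quick check in Groovy (just illustrative arithmetic):

// Back-of-envelope check: a packed RGB plane in ImageJ1 is a single Java int[] array,
// so the pixel count is capped by the maximum array length.
long maxPixels = Integer.MAX_VALUE                 // 2,147,483,647 pixels
int maxSquareSide = (int) Math.sqrt(maxPixels)     // ~46,340 pixels on a side
println "Maximum pixels per packed RGB plane: " + maxPixels
println "Largest square RGB image: " + maxSquareSide + " x " + maxSquareSide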

* Actually, @oburri has worked on providing a better limit than the default. His changes are included on my fork of QuPath here. This is where I am currently playing around with other new things that have not yet made it into the ‘main’ version, and I may be able to add other changes that could help with your application.


I’ve just re-read your first post…

You can certainly do color deconvolution in QuPath, and may be able to apply it to get your stained areas directly - although this is not (yet) quite as easy or as generalized as it should be through the user interface. But it can certainly be scripted.

In general I would expect that you really don’t need to run the analysis at the full resolution when measuring areas in a whole slide image, and that downsampling helps performance so enormously that it is certainly worth it. There may be a small loss in precision, but in cases I have seen this would have a negligible impact on the overall results.

Whenever you extract an image to ImageJ in QuPath, QuPath will automatically update the pixel sizes (and origin) of the image so that measured areas should match the corresponding region in the whole slide image in µm^2 or mm^2. This also means that if you set a threshold and generate a ROI within ImageJ, QuPath is able to perform the transformations necessary to bring that ROI back as an annotation placed on top of the whole slide image. This allows you to see how well the region you detected/measured in ImageJ at low resolution matches the correct areas in the full image.
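As a purely illustrative example (the pixel size and downsample below are made-up values, not taken from your data): if a slide were scanned at 0.25 µm/pixel and a region extracted at a downsample of 16, each pixel of the ImageJ image would represent 4 x 4 µm, and measured areas scale accordingly:

// Illustration only - both input numbers here are hypothetical example values
double fullResPixelSizeMicrons = 0.25   // hypothetical scan resolution
double downsample = 16.0                // hypothetical extraction downsample

double extractedPixelSize = fullResPixelSizeMicrons * downsample   // 4.0 µm per pixel
double areaPerPixel = extractedPixelSize * extractedPixelSize      // 16.0 µm^2 per pixel

// e.g. a thresholded ROI covering 10,000 pixels in the extracted image:
double roiArea = 10000 * areaPerPixel                              // 160,000 µm^2
println "Each extracted pixel covers " + areaPerPixel + " µm^2; example ROI area = " + roiArea + " µm^2"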



I appreciate your time, and thank you for providing the script to work around the image size limit on export. I am currently using the latest Fiji version, which I believe is based on ImageJ2. It is clear that in some cases there is difficulty managing large 2D arrays. I am doing a pass now with downsampled images and ROIs, in line with your suggestion. In the future I will try to push for higher-resolution image handling for the sake of texture and boundary analysis. Can QuPath do color thresholding?



Yes, but not in quite the same way as ImageJ.

There is at least one direct method of color thresholding (in a sense). Some QuPath commands were designed primarily for use with hematoxylin and DAB staining, since that’s what I was mostly working with, and one of them was created specifically with cytokeratin staining in mind. But it is/should be more general than that. It allows you to apply a threshold to the color-deconvolved DAB channel and create regions automatically from the result. Potentially you can call any stain ‘DAB’ to trick the command into working on other stains.

Here’s a sample script, which was generated mostly automatically based on commands being logged under the ‘Workflow’ tab:

// Set the image type
// Potentially change the stains (e.g. with 'Estimate stain vectors', or selecting regions & double-clicking on the stain under the 'Image' tab)
setColorDeconvolutionStains('{"Name" : "H-DAB default", "Stain 1" : "Hematoxylin", "Values 1" : "0.65111 0.70119 0.29049 ", "Stain 2" : "DAB", "Values 2" : "0.26917 0.56824 0.77759 ", "Background" : " 255 255 255 "}');
// Create annotation around the full image
// (it's better to reduce this if you can!)
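createSelectAllObject(true);   // assumed command - what QuPath logs when creating a full-image annotation; adjust if your version differs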
// Create annotation from thresholding the color-deconvolved DAB image
runPlugin('qupath.opencv.DetectCytokeratinCV', '{"downsampleFactor": 20,  "gaussianSigmaMicrons": 20.0,  "thresholdTissue": -1.0,  "thresholdDAB": 0.25,  "separationDistanceMicrons": 0.0}');

In this case the downsampling is pretty brutal; performance can become an issue if you have a single object with a really massive number of vertices. So you could try gradually adjusting the parameters to get a balance between accuracy and… it working at all in any reasonable timeframe.

Alternative approaches include:

  1. You can use the ImageJ macro runner within QuPath - but again you’ll need to downsample quite heavily if you want to apply it directly to the entire image, rather than to cropped regions or tiles.

  2. You could generate ‘superpixels’ in QuPath, add features to these (e.g. color), and then interactively train a classifier to identify different classes of superpixel. To get your final area result you can then either add up the areas of each superpixel according to its classification (a small scripted sketch of this step is included below), or merge superpixels with the same classification into new annotations (from which you can read off the area directly).

The superpixel method gives you the extra option of training the classification across multiple images, and using feature combinations or texture. It also means you can use higher resolution information, because superpixels can be generated on smaller image tiles and don’t require access to the full resolution image all in one go.
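If you go the superpixel route and just want the total stained area, the ‘add up the areas per classification’ step can be scripted. A minimal sketch, assuming the superpixels already exist as classified detection objects and that the image has a pixel size in µm (the calibration calls differ slightly between QuPath versions):

// Sketch: sum superpixel areas per classification, converted from pixels to µm^2.
// Assumes superpixels have already been generated and classified as detection objects.
def server = getCurrentServer()
double pixelWidth = server.getPixelWidthMicrons()    // older API; newer versions go via getPixelCalibration()
double pixelHeight = server.getPixelHeightMicrons()

def areaByClass = [:].withDefault { 0.0d }
for (def detection in getDetectionObjects()) {
    def name = detection.getPathClass()?.toString() ?: 'Unclassified'
    areaByClass[name] += detection.getROI().getArea() * pixelWidth * pixelHeight
}
areaByClass.each { name, area ->
    println String.format('%s: %.1f µm^2', name, area)
}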

I also have some experimental code for another approach that may be preferable to all of these, but it’s not yet in a very usable state… however I hope to be able to make it available for testing at least in the next few weeks.


I started a blog recently to describe and discuss QuPath a bit: where it’s going, and how to make the most of it.

Anyhow, I’ve just posted a script & explanation showing how to combine QuPath with ImageJ via Groovy scripting to directly identify regions using color deconvolution & thresholding, and immediately bring the result back into QuPath for visualization on top of the whole slide:

If you don’t mind working with scripts, this general approach gives a lot of flexibility to customize the analysis of whole slide images beyond this specific example, or to transfer existing analysis approaches developed using ImageJ to work on whole slide images.


Thanks Pete, I’ll check out the blog!