DeconvolutionLab2 filling up heap space

fiji
plugin
batch-processing
memory
deconvolution

#1

Does anyone else get OutOfMemory errors when running multiple files sequentially in DeconvolutionLab2? I have looked through previous threads on troubleshooting memory errors but I haven’t found anything that seems to apply in my case (I’m taking out the garbage and have a large working memory available).

The problem:

I wrote a macro that runs DL2 on each z-stack in a folder and waits until it has finished processing each stack before starting the run on the next one. I call the garbage collector after each run of DL2, and I am running in batch mode. I can process about 5-10 z-stacks and then I get an OutOfMemory error. I have 14GB of RAM allocated to ImageJ, which is more than enough for any single DL2 run (in the middle of a run it can use up to 10GB storing temporary images, FFTs, etc.; I should note the z-stacks are quite large, 1024x1024x134).
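For context, the loop is structurally something like the following (sketched here as a Jython script rather than my actual IJ1 macro; the folder paths and the DeconvolutionLab2 command string are placeholders, not the exact call I use):

```python
# Jython sketch of the batch loop -- not my actual macro.
# The DL2 command string below is a placeholder; replace it with whatever
# the macro recorder produces for your own DeconvolutionLab2 run.
import os
from ij import IJ
from java.lang import System

input_dir = "/path/to/zstacks"   # hypothetical folder of 1024x1024x134 stacks
psf_path = "/path/to/psf.tif"    # hypothetical measured PSF

for name in sorted(os.listdir(input_dir)):
    if not name.endswith(".tif"):
        continue
    stack_path = os.path.join(input_dir, name)
    # Placeholder DL2 call; my real macro also waits for DL2 to finish
    # before moving on to the next file (omitted here for brevity).
    IJ.run("DeconvolutionLab2 Run",
           "-image file " + stack_path +
           " -psf file " + psf_path +
           " -algorithm RL 50")
    IJ.run("Close All")   # close any windows DL2 opened
    System.gc()           # explicit garbage collection between runs
```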

When I look at the JVisualVM profiler I can see that something is not being cleared by garbage collection or the plugin’s internal memory management. Below is a screenshot of the heap space for ImageJ. You can see the rise and fall of each run of DL2, but each time something is left over in the heap space.

I’ve tried taking a heap dump between runs to see what is left over. It seems that after each round of DL2, an extra image object of about 568MB remains in the Fiji heap, until these leftover images crash ImageJ. This amount of memory corresponds almost exactly to a 32-bit copy of the z-stack (1024x1024x134) that is being kept around and never de-referenced for garbage collection. This copy accumulates even if I don’t specify any outputs from the DL2 run.
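(To spell out the arithmetic: 1024 × 1024 × 134 voxels × 4 bytes per 32-bit pixel ≈ 562MB, essentially the size of the object left behind after each run.)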

Has anyone else had an issue like this? I have a lot of 4D datasets I want to process for deconvolution and if I have to keep restarting ImageJ this is going to put a huge cramp in my workflow!

I’m happy to provide a copy of my macro, though I think that the problem is in the plugin and not my macro.


#2

Hi @akennard

Are you using the FFTW library with DeconvolutionLab2? If so, it’s possible a native buffer isn’t being freed. It’s also possible that the plugin is keeping a reference to the output even when it is not displaying it. I’m not overly familiar with the code, so I don’t know for sure. If you are able to contact the original developers, they would likely be able to give you a better idea of what is happening and how to fix it.

That being said, I’d be happy to troubleshoot if you are able to provide a representative image, PSF, and the script you are using. I’ve been meaning to dive into the DeconvolutionLab2 code anyway.

For batch jobs and/or scripts, another long-term option is to use imagej-ops. I say “long term” because ops deconvolution, and ops itself, are still in beta and require ongoing testing and tinkering. There are several scripts showing the use of imagej-ops deconvolution here. (These scripts have parameters, like optics settings and border sizes, that are tuned for specific images and may need tinkering to work on different images.)
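To give a flavor of what such a script looks like, here is a minimal Jython sketch (the op name and argument order are from memory, so double-check them against the example scripts and the ops javadoc before relying on this):

```python
# Minimal Jython sketch of calling imagej-ops deconvolution.
# The op name and argument order are from memory and may not match the
# current ops API exactly; treat this as a starting point only.
#@ OpService ops
#@ Dataset image
#@ Dataset psf
#@ Integer iterations
#@OUTPUT Object deconvolved

# Let the ops matcher pick a Richardson-Lucy implementation; the
# function variant allocates and returns the output image.
# (The image and PSF may need to be converted to 32-bit float first.)
deconvolved = ops.run("deconvolve.richardsonLucy", image, psf, iterations)
```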

(By the way, Ops is meant to be complementary to projects like DeconvolutionLab2. As ops matures, ideally people would write wrappers for the algorithms in DeconvolutionLab2. Some of the deconvolution-related code already wraps other projects, like Jizhou Li’s theoretical PSF generation.)

My own goal is to be able to write workflows that use deconvolution as part of a larger image-processing pipeline and to quantify the effect of the deconvolution on the end result. There is no point in using the fanciest deconvolution algorithm if it has no significant effect on the final measurements. I’ve seen cases (especially in confocal imaging) where relatively simple sharpening or Wiener filters had a significant positive effect on the research and there was no need for long-running iterative filters. I’ve also seen cases (widefield with significant aberrations and extended objects) where you needed a complex deconvolution workflow (measure and preprocess the PSF, extend the images properly, use an iterative regularized algorithm) to get good results.


#3

Thanks so much for your help @bnorthan !

That’s a good question, and I am not sure. I have essentially no experience working with Java; from inspecting the .jar and the code on GitHub I can see that the FFTW library is included, along with other FFT packages like AcademicFFT, but I am not sure under which runtime conditions each one is used. I was planning on doing due diligence on this forum before contacting the developers, but it sounds like this problem might be far enough in the weeds that the developers should be in the loop; I’ll be sure to contact them.

If this offer is still open, thank you so much! I’d be happy to send you all of that; a Dropbox link would probably be best because the z-stack itself is rather large (282MB). Let me know the best way to proceed.

I had not heard of imagej-ops, but I agree that having a more modular way to call big programs like DL2 would be very useful; as it stands, my code is super quick-and-dirty, just to test out the batch processing, and something more extensible would be welcome. I may give those scripts you mentioned a try!

This point is also well taken; in my case it may well be that all I need is sharpening in the z-direction. What I really want to do is measure heights in the z-direction reliably, and I figured that deconvolution would be important for removing any confounding blur, though this is data from a (spinning-disk) confocal, so the blur isn’t too extreme. It would be nice to know quantitatively how much extra I’d get from full deconvolution and whether it’s worth the computational time/cost. Very curious to see how this project evolves for you!


#4

Hi @akennard

How have you been generating the PSF? Do you have a bead image?

I’d be happy to take a look at the images and run them through DeconvolutionLab and ops. You can post the Dropbox link here if you are comfortable sharing it publicly, or send it to me at bnorthan@gmail.com. For spinning disc it would be optimal to have a bead image too. If you don’t have a bead image, let me know how you are determining the PSF. There are theoretical PSF generators that work well for widefield, and for confocal the PSF can be approximated by squaring the widefield PSF; however, computing a spinning-disc PSF theoretically is harder.
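For what it’s worth, the squaring trick looks roughly like this Jython sketch (a crude approximation only: it ignores the pinhole geometry of the spinning disc, and the file path is just a placeholder):

```python
# Jython sketch: approximate a confocal PSF by squaring a widefield PSF
# and renormalizing it so it sums to 1. A rough approximation only; it
# does not model the spinning-disc pinhole geometry.
from ij import IJ
from ij.process import StackStatistics

psf = IJ.openImage("/path/to/widefield_psf.tif")  # hypothetical widefield PSF stack
IJ.run(psf, "32-bit", "")        # work in floating point
IJ.run(psf, "Square", "stack")   # square every voxel

# renormalize so the squared PSF integrates to 1
stats = StackStatistics(psf)
total = stats.mean * stats.pixelCount
IJ.run(psf, "Divide...", "value=%f stack" % total)
psf.show()
```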

Brian


#5

Great, I’ll send it over via email later today! Thanks again :slight_smile:

I have a bead-generated PSF; a postdoc in our lab has been working on getting good PSFs for all our microscopes. He averages PSFs from about a hundred sub-resolution beads imaged at very fine z-spacing, and then I interpolate to the z-spacing of my data. The only caveat is that the beads are right next to the coverslip, and since this is imaging of an embryo I am sure there is some depth-dependent aberration that is not accounted for. I am using a water-immersion lens to minimize that as much as possible, but a future step is going to be embedding beads at different z and investigating the depth dependence of the PSF.


#6

Hi @akennard

That sounds very interesting.

Embedding beads is a good idea. It would be interesting to see the practical differences between a result deconvolved using the (presumably incorrect) PSF at the coverslip vs. using an aberrated PSF from the middle of the image.

Feel free to send me the images when you get a chance.

Thanks


#7

Sorry about the long delay. Like a lot of us, I ended up getting distracted by a bunch of other things.

I ended up processing the images you sent with the Richardson-Lucy algorithm. By the way, are these images publicly shareable? If not, I can contact you off-list about the details.

I did not notice a memory leak while processing the images, but to be honest I only did two deconvolutions and did not keep careful track of memory. I still plan to look at the memory issue in the near future, unless the original developers have already fixed it for you; let me know if that’s the case.

@hadim has forked DeconvolutionLab2 and made a GPU version (https://github.com/hadim/DeconvolutionLab2). If the memory leak hasn’t been fixed yet and you have a good GPU, it might be a way to bypass the leak (and get faster results).


#8

Hi Brian,

Thanks for your help! I have not fixed the memory leak yet. I might also try to reproduce it on a colleague’s machine, in case it’s just something odd about my Java installation.

I do have access to a good GPU, so I may check out the GPU implementation you mentioned. I’ll talk to the colleague in charge of GPU access to see whether that would work as well.


#9

A while back @akennard kindly sent me his script for the sequential deconvolution of all stack files in a folder. After implementing a few minor changes to optimise it for my needs, I deployed it on a set of 34 identically sized data files. Sadly, I have probably run into the same out-of-memory issue that he reports. Essentially, the memory used by DeconvolutionLab2 increases at every loop iteration by about the size of the input stack, until the limit is reached:

Exception in thread "Thread-68" java.lang.OutOfMemoryError: Java heap space

In my case, with 20GB of RAM assigned to Fiji and 353MB input files, the issue popped up at file #30.
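(That is roughly consistent with one extra copy of the input stack being retained per run: 29 leftover copies × 353MB ≈ 10GB, which together with DeconvolutionLab2’s own working memory for the next run is plausibly enough to exhaust a 20GB heap.)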

I can certainly work around the problem by feeding my data in batches of 29 files, but I would like to inform the developers of DeconvolutionLab2 (I’m assuming that @akennard hasn’t already contacted them).

Cheers, L

Update: I can confirm that I’ve sent an email to Daniel Sage reporting the issue and linking this thread.


#10

Update on this bug: @daniel.sage and colleagues have just released an update of DeconvolutionLab2 (version 2.1.1), and I’m happy to report that the memory leak is resolved! DeconvolutionLab2 can now be run sequentially on an arbitrary number of stack files in a folder without running out of memory.