How to have a batch of images with the same brightness and contrast?


I have a set of images of Petri dishes which have different brightness and contrast. I need them all to have the same brightness and contrast. I have tried “Enhance Contrast” and that did not work. I also opened all pictures in a stack and adjusted brightness and contrast there, but the differences between the images did not decrease. Can anyone please help me with this?


Hello @Siavash and welcome to the forum!

You might want to apply histogram matching so all your images have more or less the same histogram. In Fiji, you can use a script like this one:

// @ImagePlus(label="Image to transform") imp1
// @ImagePlus(label="Reference image") imp2

import ij.IJ;
import histogram2.HistogramMatcher;

ip1 = imp1.getProcessor();
ip2 = imp2.getProcessor();

hist1 = ip1.getHistogram();
hist2 = ip2.getHistogram();

matcher = new HistogramMatcher();
newHist = matcher.matchHistograms(hist1, hist2);


// apply the matched histogram to the image to transform
ip1.applyTable( newHist );
imp1.updateAndDraw();

// show the histograms of both images
IJ.run( imp1, "Histogram", "" );
IJ.run( imp2, "Histogram", "" );

(Original script from Jan Eglinger.)
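To see what `HistogramMatcher` is doing under the hood, here is a minimal sketch of histogram matching in plain Python (illustrative only, not the Fiji API): it builds a lookup table (LUT) mapping each source intensity to the reference intensity whose cumulative count is closest.

```python
def cdf(hist):
    """Normalized cumulative distribution of a 256-bin histogram."""
    total = float(sum(hist))
    acc, out = 0.0, []
    for count in hist:
        acc += count
        out.append(acc / total)
    return out

def match_histograms(hist1, hist2):
    """Return a LUT that, applied to the source image, makes its
    histogram approximate the reference histogram."""
    c1, c2 = cdf(hist1), cdf(hist2)
    lut = []
    for v in range(256):
        # reference level whose CDF value is closest to c1[v]
        best = min(range(256), key=lambda w: abs(c2[w] - c1[v]))
        lut.append(best)
    return lut
```

Applying the LUT is then just `[lut[v] for v in pixels]`, which is what `applyTable` does in ImageJ.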


Thanks for your reply. The problem is that I have too many images. Can you tell me how I can adjust one image to my desired settings and then match a whole stack to it? Or open a stack and run a script so that all slices get the same histogram, so I can then change the brightness/contrast of all of them together?
Also, what is `ij.IJ`? It shows an error when I run the script. Do I need to install a plugin for it?


What I posted above is a BeanShell script. In Fiji you can run it from the Script Editor, for example.

If you have too many images, try to open them as a stack and use this other script:

// @ImagePlus(label="Stack of images to normalize") imp
// @Integer(label="Reference slice",value=1) referenceSlice
// @OUTPUT ImagePlus result

import ij.IJ;
import ij.ImageStack;
import ij.ImagePlus;
import histogram2.HistogramMatcher;

if( referenceSlice < 1 || referenceSlice > imp.getImageStackSize() )
	IJ.error( "Error: wrong selection of reference slice" );
// Create result stack
resStack = new ImageStack( imp.getWidth(), imp.getHeight() );
// Create a histogram matcher for all slices
matcher = new HistogramMatcher();
// Read reference slice histogram
ip2 = imp.getStack().getProcessor( referenceSlice );
hist2 = ip2.getHistogram();
// Match the histogram of each slice to the reference one
for( slice=1; slice <= imp.getImageStackSize(); slice++ )
{
	ip1 = imp.getStack().getProcessor( slice );
	if( slice != referenceSlice )
	{
		hist1 = ip1.getHistogram();
		newHist = matcher.matchHistograms( hist1, hist2 );
		// apply the matched histogram as a lookup table
		ip1.applyTable( newHist );
	}
	resStack.addSlice( imp.getStack().getSliceLabel( slice ), ip1 );
}
// Create result ImagePlus
result = new ImagePlus( "Histogram-matched-" + imp.getTitle(), resStack );
result.setCalibration( imp.getCalibration() );

It will adjust the histogram of all your stack slices to the slice you select.
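Conceptually, the per-slice work in the loop above reduces to two primitives: reading a slice's histogram and remapping its pixels through the matched lookup table. A minimal sketch in plain Python (the Fiji image types are replaced here by flat lists of 8-bit values; these helpers only illustrate what `getHistogram()` and `applyTable()` do):

```python
def histogram(pixels, bins=256):
    """256-bin histogram of a flat list of 8-bit pixel values,
    analogous to ImageProcessor.getHistogram()."""
    hist = [0] * bins
    for v in pixels:
        hist[v] += 1
    return hist

def apply_lut(pixels, lut):
    """Remap every pixel through a lookup table,
    analogous to ImageProcessor.applyTable()."""
    return [lut[v] for v in pixels]
```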

I hope it helps!



Hi Ignacio,

Thanks for the script. It helped a lot. Can you please tell me how I can crop a stack of images of Petri dishes so that each crop is a square around the dish? I don’t want to do it one by one.

Thanks :slight_smile:


Can’t you just select the square with a rectangular selection and click on Image > Crop?



Yes, I can but I have hundreds of images and not all of them are placed right at the center. So when I stack them and crop some of them fall out of the frame.

Original tif file: download

I have uploaded one of the images. I need to have a rectangle, as close as possible to a square, around the Petri dish, but I do not want the crop edges to be tangent to the dish edges. Here is a post-crop image.

Original tif file: download

These two are two different images. I am using an R package named “diskImageR” to process them, and this program can actually locate the disk. It should not be too difficult, because we always use the same disks in the center and the same 100 mm diameter Petri dishes.

Thank you,


It looks like you need to first align all your images using any of the available registration plugins, and then crop.
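If the dish is clearly brighter (or darker) than the background, an alternative to registration is to crop each image around the dish automatically: threshold, take the bounding box of the foreground, pad it with a margin so the edges are not tangent, and crop. Here is a sketch in plain Python on a 2D list of 8-bit values (the function names and the fixed threshold are illustrative; in Fiji the same idea is a threshold plus a selection bounding box plus `Image > Crop`):

```python
def bounding_box(image, threshold):
    """Bounding box (top, left, bottom, right) of all pixels above
    threshold. Assumes at least one pixel is above threshold."""
    rows = [r for r, row in enumerate(image) if any(v > threshold for v in row)]
    cols = [c for c in range(len(image[0]))
            if any(row[c] > threshold for row in image)]
    return min(rows), min(cols), max(rows), max(cols)

def crop_with_margin(image, threshold, margin):
    """Crop around the thresholded object, keeping `margin` extra pixels
    on each side (clamped to the image borders)."""
    top, left, bottom, right = bounding_box(image, threshold)
    top = max(0, top - margin)
    left = max(0, left - margin)
    bottom = min(len(image) - 1, bottom + margin)
    right = min(len(image[0]) - 1, right + margin)
    return [row[left:right + 1] for row in image[top:bottom + 1]]
```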


Hello @iarganda,

I’m trying to run your macro in Fiji but I have this error:

"Undefined variable in line 7: < import > ij.IJ;"

Do you know what the problem is?

Thank you so much,


Hi Baptiste,

Have you set the script language to BeanShell? To do so, open the Script Editor using “[”, then choose the language, paste the script, and it should run. I had the same problem.


Hi Siavash,

Thank you, that’s working properly now! :slight_smile:

I have a question, by the way: did you normalize your images to the most average image you had, to avoid large modifications?
How did you choose the reference slice?

Thank you for your help,


Hello @Baptiste,

Yes, in principle you should select as reference the image that best represents your set of images. I would select the one with the most diverse content and the best contrast.


Hi @iarganda,

Thank you very much for the advice.
Do you think I could automate this selection process with a macro?

It’d need a function that can estimate the content diversity and select an image according to its contrast.



As a simple approach, you could get the mean and the standard deviation of the intensity of each image and select the one with the most common mean and largest standard deviation.
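That selection rule can be sketched in a few lines of plain Python (illustrative names, flat pixel lists standing in for images): pick the image whose mean is closest to the mean of all the means, penalizing distance from the largest standard deviation.

```python
import statistics

def pick_reference(images):
    """Index of the image whose mean is closest to the overall mean of
    means and whose standard deviation is closest to the largest one.
    `images` is a list of flat lists of pixel values."""
    means = [statistics.mean(img) for img in images]
    stds = [statistics.pstdev(img) for img in images]
    grand = statistics.mean(means)
    max_std = max(stds)

    def score(i):
        # squared distance from the grand mean, plus squared distance
        # from the maximum standard deviation (lower is better)
        return (means[i] - grand) ** 2 + (stds[i] - max_std) ** 2

    return min(range(len(images)), key=score)
```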


@iarganda That’s indeed the simplest way, thanks!


Hi @iarganda,
Sorry for asking your help again.
I created a very simple macro to obtain “the mean and the standard deviation of the intensity of each image and select the one with the most common mean and largest standard deviation”.

 //Fill the Results Table with per-slice statistics
 run("Clear Results");
 getVoxelSize(w, h, d, unit);
 for (i=1; i<=nSlices; i++) {
     setSlice(i);
     getStatistics(area, mean, min, max, std);
     row = nResults;
     if (nSlices==1) setResult("Area ("+unit+"^2)", row, area);
     setResult("Mean ", row, mean);
     setResult("Std ", row, std);
     setResult("Min ", row, min);
     setResult("Max ", row, max);
 }

 //Get the stackMean (mean of the slice means)
 sum = 0;
 for (d=0; d<nResults; d++)
     sum = sum + getResult("Mean ", d);
 stackMean = sum / nResults;
 print("stackMean = ", stackMean);

 //Get the maxStd
 maxStd = 0;
 for (d=0; d<nResults; d++)
     if (maxStd <= getResult("Std ", d)) maxStd = getResult("Std ", d);
 print("maxStd = ", maxStd);

 //Determine which image is representative
 for (f=0; f<nResults; f++)
     setResult("optMean", f, (getResult("Mean ", f)-stackMean)*(getResult("Mean ", f)-stackMean));
 for (f=0; f<nResults; f++)
     setResult("optStd", f, (getResult("Std ", f)-maxStd)*(getResult("Std ", f)-maxStd));
 best = 0;
 for (f=0; f<nResults; f++)
     if (getResult("optMean", f)+getResult("optStd", f) < getResult("optMean", best)+getResult("optStd", best)) best = f;
 print("Most representative slice = ", best+1);
 updateResults();
However, I need to determine which parameter (the mean or the standard deviation) is the more significant one for optimizing my image selection. Do you know how I could weight my optimization function (i.e. this part: getResult("optMean",f)+getResult("optStd",f))?

Thank you in advance for your kind help,



Hello @Baptiste,

I edited your post to enable code highlighting. Nice macro!

“Do you know how I could weight my optimization function (i.e. this part: getResult("optMean",f)+getResult("optStd",f))?”
I guess you have to experimentally find the values. Why don’t you start by weighting the function using a constant alpha and (1-alpha), and have a look at the results using alpha = 0.5 and around that value?
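The alpha-weighting idea can be sketched in plain Python (the function names and the `opt_*` arrays are illustrative, not part of the macro above): blend the two error terms with a single weight and see whether the chosen image changes as alpha varies around 0.5.

```python
def weighted_score(opt_mean, opt_std, alpha):
    """Blend both error terms; alpha weights the mean term,
    (1 - alpha) weights the standard-deviation term."""
    return alpha * opt_mean + (1.0 - alpha) * opt_std

def pick_best(opt_means, opt_stds, alpha):
    """Index of the image minimizing the weighted score."""
    return min(range(len(opt_means)),
               key=lambda i: weighted_score(opt_means[i], opt_stds[i], alpha))
```

Running `pick_best` for a few alpha values (0.3, 0.5, 0.7, …) shows directly how sensitive the selection is to the weighting.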


Hi @iarganda,

I actually modified it a bit to have a proper optimization function. I now have:

 (getResult("Mean ", f)-stackMean)*(getResult("Mean ", f)-stackMean)/(stackMean*stackMean)

This is an application of the least-squares method, with (a-b)^2/b^2 as a normalized estimate of my error, which enables comparison across images.

I actually want to know which parameter (between Mean and StD) is the more accurate when I want to select an image for histogram matching. Are you sure I can determine it experimentally? I don’t think so, because I don’t know which image is the best: I’m looking for it!



Sure, but you have some intuition. For example, by visually inspecting the chosen image, you will see if the large standard deviation is due to artifacts in the image (such as bleaching, over-saturation, etc) or to real content diversity. That’s what I meant when I said “experimentally find the values”. Your script will point you to the theoretically best candidates and you will choose based on your knowledge. Does it make sense?


@iarganda Yes it does make sense, thank you so much.

I just checked and it’s not very obvious to be honest (I don’t have such artifacts, and it’s difficult to estimate what content diversity is the most valuable, since I’m working on brain sections).

So, for those who face the same problem as me, I recommend putting a higher weight on optMean (the error of each image relative to the mean), say 0.9 (against 0.1 for optStd), because often the image with the most diverse content isn’t the best candidate (in my case, at least). But you have to try some values experimentally.

Thank you once more @iarganda for the help and solving this problem with me, that was cool!