Do you need dark frames?

Updated: 06/04/2007



Almost from the beginning, we have been taking dark frames to subtract from the light frames as part of our calibration workflow, at least with most sensors.  During a recent evaluation of the Apogee U16M, I examined dark current generation and statistics.  Given modern image acquisition and combining techniques, I began to wonder if this trip is necessary.



Dark current is generated within individual pixels and accumulates over time.  Kodak data sheets typically test the sensors by averaging all the pixels on the sensor during an un-illuminated exposure of unspecified duration at 25C.  Dark current generation (dark signal) is reported in e/pixel/sec.  A "doubling temperature" is also specified, typically around 6.3C.  As an example, the KAF16803 used in the Apogee U16M has a specified typical dark signal of 3 e/pixel/sec and a maximum dark signal of 15 e/pixel/sec.  If the sensor is operated at 18.7C, the dark signal drops to 1.5 e/pixel/sec.  Operating it at 12.4C drops the dark signal to 0.75 e/pixel/sec.  And so on...  The mathematical expression for this is given in this paper on CCD Operating Temperature.  So, for a KAF16803 operating at -15C, the expected typical dark signal is 0.037 e/pixel/sec.
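The doubling-temperature relation from the data sheet is straightforward to apply.  Here is a minimal Python sketch (the function name is my own, not from any camera SDK), assuming the standard relation where the dark signal halves for every doubling temperature of cooling below 25C:

```python
def dark_signal(d25, temp_c, doubling_temp=6.3):
    """Dark signal (e/pixel/sec) at temp_c, scaled from the 25C spec value."""
    return d25 * 2 ** ((temp_c - 25.0) / doubling_temp)

# KAF16803 typical spec: 3 e/pixel/sec at 25C
print(round(dark_signal(3.0, 18.7), 2))    # one doubling temperature cooler: ~1.5
print(round(dark_signal(3.0, -15.0), 3))   # at -15C: ~0.037
```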

But this is only part of the issue.  The average dark signal is easily skewed higher by hot pixels.  Assume 10 people are going to give you a donation.  Nine give you $10 and one gives you $1,000.  Your average or mean donation is $1,090/10 or $109 per donor.  But a more representative figure for the typical donation is $10.  A median calculation is used to "de-skew" the data.  See this link for more on the meaning of median.
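The donation example is easy to verify with Python's standard statistics module:

```python
import statistics

donations = [10] * 9 + [1000]        # nine $10 donors, one $1,000 outlier
print(statistics.mean(donations))    # 109 -- skewed upward by the outlier
print(statistics.median(donations))  # 10.0 -- representative of a typical donor
```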

Dithering is a process of moving the camera slightly between images so that the fixed pattern noise of the sensor, including hot pixels, has a lower probability of falling on the same image locations when we register the images.  Thus we can minimize the hot pixels in the image and are left with the lower dark signal distribution, represented by the median or even less.



Since there is a bias level for all sensors, we need to remove its effect on the statistics.  I subtracted a master bias from a master dark for the KAF16803 sensors.  Here are the results, as measured in CCDStack:


Here we see the mean and median values as well as the very large standard deviation, STD.  This indicates a wide variation in values as well as some skewing of the mean upwards due to hot pixels.  Working backwards from this data to the specification with a measured gain of 1.4, the dark signal would be 0.037 e/pixel/sec * 600 sec. or about 22e.  22e * 1.4 is about 31 ADU for the mean.  So this sensor seems to have reasonable agreement with the spec based on the mean of 33.68 reported above, but note the median is considerably below the mean.

This camera has a read noise of 9.6e.  The noise corresponding to the median, not the mean, would be SQRT(20/1.4) or 3.8e.  The combined read and dark noise is 10.3e.  Now if the sky noise is 100e or so, the effect of the read and dark noise is essentially overwhelmed by the sky glow.  In this case, the sky glow was 2200 ADU or 3000e with a noise contribution of 55e.  So theoretically, there should be some added noise.  However, with multiple sub-exposures, dithering and rejection, this effect will be minimized: after registration, even the warm pixels no longer land on the same image location in every frame, so their values appear as random outliers that rejection can remove before the mean combine.
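These independent noise sources combine in quadrature.  A quick check of the arithmetic above, using the values from the text (20 ADU median dark level, gain of 1.4, ~3000e of sky glow):

```python
import math

read_noise = 9.6                   # e, camera spec
dark_noise = math.sqrt(20 / 1.4)   # shot noise on the median dark signal, ~3.8 e
sky_noise = math.sqrt(3000)        # shot noise on the sky glow, ~55 e

print(round(math.sqrt(read_noise**2 + dark_noise**2), 1))                 # 10.3
print(round(math.sqrt(read_noise**2 + dark_noise**2 + sky_noise**2), 1))  # 55.7
```

The total (55.7e) is barely above the sky noise alone (about 55e), which is why the read and dark contributions are effectively buried by the sky glow.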

Now, to properly get hot pixels to disappear, three things are needed: correct dithering, proper combining and appropriate rejection algorithms.

For best results, dithering should be non-random to avoid statistical clumping and should not repeat the same location for the entire stack.  CCDAutoPilot's enhanced dithering algorithm meets this requirement.  The combining algorithm should "snap to" pixels, so that a hot or warm pixel is not smeared by resampling.  CCDStack's nearest neighbor algorithm does just that!  Finally, CCDStack's Poisson Sigma Reject rejects data that does not fit the normal distribution expected for photons and is excellent at removing outliers.
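Poisson Sigma Reject is specific to CCDStack, but the general idea can be sketched with a plain sigma clip: a hot pixel that lands on a given image location in only one frame of a dithered stack shows up as an outlier and is rejected before the mean combine.  A minimal illustration (a generic sigma clip, not CCDStack's actual algorithm):

```python
import statistics

# Values of one image location across a 12-frame dithered stack; thanks to
# dithering, a hot pixel landed here in only a single frame.
stack = [102, 98, 101, 99, 100, 103, 97, 100, 101, 99, 850, 102]

def sigma_clip_mean(values, sigma=2.5):
    """Reject values more than sigma standard deviations from the median,
    then mean-combine the survivors."""
    med = statistics.median(values)
    std = statistics.stdev(values)
    kept = [v for v in values if abs(v - med) <= sigma * std]
    return statistics.mean(kept)

print(round(sigma_clip_mean(stack), 1))   # ~100.2 -- the 850 outlier is gone
```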



I took a stack of 12 x 10 minute unbinned sub-exposures of M101.  In one case, I did a standard dark subtraction using a master dark comprised of 10 dark exposures.  In the other case, I did a bias subtraction only, using a master bias comprised of 10 bias exposures.  These masters were combined in CCDStack using min-max clip rejection, followed by a mean combine.

The processing consists of the following steps:

- Dark or bias subtraction and flat field correction
- Registration using auto-star select and nearest neighbor alignment
- Auto normalize
- Poisson Sigma Reject, sigma multiplier: 2.5
- Mean combine

Here is an animated comparison of part of the two images with a 3 sec. pause between them:


Can you detect by eye which is dark-subtracted and which is bias-subtracted?  (They are identified at the bottom of this page.)  Meanwhile, here is some measured data from CCDStack:


The left image shows the region in the animation with the measurement box highlighted and the right shows the data from that box.  Note particularly the SNR values.  The SNR with the bias-only subtraction is very similar to (actually slightly higher than) that with the dark subtraction.

I repeated this experiment with a KAF6303E, which has a considerably different dark signal and hot pixel distribution.  In fact, the sensor is out-of-spec according to the data sheet.  Nevertheless, similar results were obtained.  Here is the animation:


And here is the corresponding data:


In this case, the dark-calibrated SNR is slightly better than the bias-calibrated one, but the two are certainly very close.


A tentative conclusion for me is that, for broad band imaging, darks may not be needed and a suitable bias frame can be used with equivalent results.  This requires non-repeat dithering, as in CCDAutoPilot's enhanced dithering mode, along with nearest neighbor combine and Poisson sigma reject, as in CCDStack.  Since dark frames require exposures equal to the sub-exposure duration, and the noise in the darks is in large measure due to read noise, it may be more advantageous from a SNR perspective to build the master from many more bias frames than could conveniently be taken as darks.  Also, since bias frames are not exposure dependent, one needs only a master bias at each binning used, independent of sub-exposure duration, greatly reducing the dark library requirement and maintenance.
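The argument for large bias stacks follows from how noise averages down: the noise of a mean-combined master scales as the per-frame noise divided by the square root of the number of frames.  A quick sketch using this camera's 9.6e read noise:

```python
import math

def master_noise(per_frame_noise, n_frames):
    """Noise (e) of a mean-combined master built from n_frames frames."""
    return per_frame_noise / math.sqrt(n_frames)

read_noise = 9.6
print(round(master_noise(read_noise, 10), 2))    # master from 10 darks:   ~3.04
print(round(master_noise(read_noise, 100), 2))   # master from 100 biases:  0.96
```

Since bias frames take only seconds each, a 100-frame master bias is far more practical to acquire than a 100-frame master dark at 10 minutes per frame.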

To see if you can use this technique, you should calculate the median (not average or mean) dark signal for your particular camera and sensor temperature, using the methodology described above.  For a KAF6303E operating at -20C or a KAF16803 operating at -15C, and with my skies at 20.2 mag/arcsec², dark frames do not appear to be needed.

When the average dark signal begins to approach 0.5% of the flat field exposure, it can impact the flat fielding accuracy.  For example, assume a 20,000 ADU flat.  If your average dark signal approaches 50 - 100 ADU, you should probably use dark frames.
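This rule of thumb is simple to check for any setup (the 0.5% budget is the figure from the text; the function name is my own):

```python
def dark_signal_matters(dark_adu, flat_adu, budget=0.005):
    """True when the average dark signal exceeds the flat-fielding budget."""
    return dark_adu > budget * flat_adu

print(dark_signal_matters(30, 20000))    # False: 30 ADU is well under 100 ADU
print(dark_signal_matters(120, 20000))   # True: time to take dark frames
```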

If you are doing narrow band imaging, there is little if any sky glow noise in which to bury the dark signal.  Also, narrow band sub-exposures are typically longer, leading to higher dark signal.  It would be interesting to perform an experiment similar to the above and see how the SNRs compare.

This area needs further study with different cameras, filters, exposures, sensor dark signal and operating temperature, stack sizes, etc. 


In case you were curious about the animations, frame 1 is bias-subtracted only and frame 2 is dark subtracted.









