Noise and Sub-Exposure Considerations

Updated: 07/13/2007



We are all familiar with the concept of determining the magnitude of our readout noise and calculating the minimum sub-exposure needed to overwhelm readout noise.  This is done with the intent of minimizing noise in general.  With the current crop of CCD cameras, this can lead us to sub-exposures of 10 minutes or so, depending on our sky glow.  This in turn requires precise guiding and we all suffer the occasional lost sub-exposure due to clouds, satellites, airplanes and the like.

Given some recent work I did evaluating dithering, followed by an evaluation of various combine methods, including Ray Gralak's Sigma combine, I got to wondering whether there might be some better way to optimize SNR (Signal to Noise Ratio).  Sophisticated combine methods do better with a reasonably large number of sub-exposures, the more the better, in order to reject hot and cold pixels below the threshold of normal filtering, cosmic ray impacts and other transitory events, even satellite trails.  Wouldn't it be nice if we could effectively increase the number of sub-exposures without paying too high a price in SNR?

I recently posed this question, and Al Kelly developed an intriguing spreadsheet for SNR calculation, where the desired signal was the sky background itself!  Al started with an exposure whose equivalent exposure time was such that the sky noise corresponded to the readout noise.  He then calculated the SNR for an increasing number of sub-exposures and different durations of sub-exposures.  I ran a number of cases and came up with the following curve.

This plot shows the impact of doubling the number of exposures and halving their duration, when compared to the recommended background level to overcome readout noise, as referenced above.  The basic idea is that you can double the number of sub-exposures you would normally take at half the duration and suffer a modest SNR penalty.  The penalty one would suffer is a function of how dark your skies are and how low your readout noise is.  In fact, with typically one more sub-exposure, you can get back to the original SNR.  Why would you want to do this?  Primarily to take advantage of the more advanced combination tools available to reject unwanted artifacts, whether they be cosmic ray hits, camera defects, hot and cold pixels or miscellaneous satellite trails.  Perhaps an example will make this more apparent.

I use an SBIG ST-10XME with a 12.5" f/9 Ritchey-Chrétien OTA in suburban skies.  My sky brightness has been as low as 20.2 mag/arc-sec².  My ST-10XME has a readout noise of 10 e- and a gain of 1.4 e-/ADU.  On a typical night, a 10 minute 1x1 binned exposure gives an average background count of 1050 ADU.  My equivalent electron count is:

1050 ADU / 10 min. x 1.4 e-/ADU = 147 e- per minute

The noise associated with this background is simply the square root of the count, or 12 e- rms for one minute's accumulation.  This is a factor of 1.2 times my readout noise of 10 e-.  So my sky-equivalence readout-noise time is 0.8 minutes.  I typically image with 10 minute sub-exposures, so my Readout Noise Equivalence is 10/0.8, or approximately 12.  Reading the above plot at 12, the SNR loss is approximately 3.8%.
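The arithmetic above can be sketched in a few lines.  This is a minimal sketch using the numbers quoted in the text; note that if you equate noise variances rather than rms values, the equivalence time comes out nearer 0.7 min. than 0.8 min., which does not change any of the conclusions.

```python
# Sketch of the sky-background bookkeeping, using the values from the text:
# gain 1.4 e-/ADU, read noise 10 e- rms, 1050 ADU background in a 10 min sub.
gain = 1.4          # e-/ADU
read_noise = 10.0   # e- rms
bg_adu, sub_min = 1050, 10

rate = bg_adu * gain / sub_min       # sky rate in e- per minute
sky_noise_1min = rate ** 0.5         # shot noise after 1 minute of sky

# Sky-limited criterion: sky shot-noise variance equals read-noise variance,
# i.e. rate * t_eq = read_noise**2
t_eq = read_noise ** 2 / rate        # minutes

print(f"sky rate        : {rate:.0f} e-/min")
print(f"1-min sky noise : {sky_noise_1min:.1f} e- rms")
print(f"equivalence time: {t_eq:.2f} min")
```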

What this means is that I will suffer an SNR degradation of 3.8% if I go from, say, 5 sub-exposures at 10 minutes each to 10 sub-exposures of 5 minutes each.  Further, I can make up that SNR loss with a single additional 5 minute sub-exposure.
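This trade can be checked with a simple noise model.  The sketch below is not Al's spreadsheet; it models only sky shot noise and readout noise, with `sky_rate` and `read_noise` taken from the worked example above.

```python
import math

def stack_snr(n_subs, sub_min, sky_rate, read_noise):
    # Signal adds linearly over the stack; shot noise and readout noise
    # add in quadrature (per-sub variance = sky electrons + read_noise**2).
    signal = n_subs * sky_rate * sub_min
    noise = math.sqrt(n_subs * (sky_rate * sub_min + read_noise ** 2))
    return signal / noise

sky_rate, read_noise = 147.0, 10.0                  # e-/min and e- rms
snr_long  = stack_snr(5, 10, sky_rate, read_noise)  # 5 x 10 min.
snr_short = stack_snr(10, 5, sky_rate, read_noise)  # 10 x 5 min.
snr_extra = stack_snr(11, 5, sky_rate, read_noise)  # one extra short sub

print(f"SNR loss from halving sub length: {1 - snr_short / snr_long:.1%}")
print(f"recovered with one extra sub: {snr_extra > snr_long}")
```

With these numbers the model predicts a loss of about 3%, in the same ballpark as the 3.8% read off the plot, and confirms that a single extra 5 minute sub more than recovers the original SNR.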

Changing my acquisition strategy to this approach has a number of advantages and a couple of disadvantages.  On the plus side:

• More sub-exposures available for advanced combining routines
• A higher possibility of unguided exposures
• Less damage due to satellite trails, airplanes, cosmic rays
• Take better advantage of dithering
• Better preservation of star colors
• Less blooming

On the minus side:

• More files to deal with
• More download time for more files
• Slightly longer total imaging time to preserve the same SNR

I normally take at least 6 and ideally 9 clear filter exposures for the luminance information.  However, for color, I am more likely to take 3 or 4 sets of 2x2 binned data.  With this approach, I can not only increase the number of luminance frames, thereby increasing the improvement due to advanced combining and dithering techniques, but also get significantly more color frames in roughly the same period of time.  I expect this to lead to higher quality color data.

Summarizing my data acquisition strategy, I plan to change from the left column to the right column, below:

Overwhelm Readout Noise    Maximize Sub-Exposure Number
L: 8 x 10 min.             L: 17 x 5 min.
R: 3 x 10 min.             R: 6 x 5 min.
G: 3 x 8 min.              G: 6 x 4 min.
B: 3 x 13 min.             B: 6 x 7 min.

In addition to the extra L frame of 5 minutes, I will pick up 9 additional 1x1 and 9 additional 2x2 download times for an additional 2.5 minutes, lengthening my overall data acquisition time by 7 minutes or so.  I believe this will give improved results and will report results as they become available.


1/13/2004 Experiment

To test this theory, I took some shots of M1.  While east of the meridian and before the moon had risen, I took 6 exposures at 10 min. each, binned 1x1, with my system as described above, AO7 guided.  The image scale is 0.49 arc-sec./pixel.  I then followed with 13 exposures at 5 min. each, also binned 1x1, with the same equipment.  By the time the second set of exposures was started, the 5-day-old moon had risen, but it was 95° away from M1.  All frames were acquired with Sequencer II, using a dither value of 3 pixels.  The AO7 was operating at 10 Hz.

The resultant frames were reduced and aligned in Mira with 32-bit processing.  Reduction consisted of directly subtracting a library dark for each exposure time.  The library dark was a Sigma combine of 40 individual frames.  The flat frame was a min/max clip combine (Mira) of 9 flats at each camera orientation.  Flat SNR exceeded 400:1.

The resulting calibrated frames were combined in Ray Gralak's Sigma.  Recommended default values were used for smoothing and normalization.  The Standard Deviation Sigma Threshold was adjusted to bring the resultant usage close to 98%.  This resulted in a Sigma of 0.8 for the 6 x 10 minute combination and 0.5 for the 13 x 5 minute combination.  No further processing was undertaken on the .FIT data.  A cropped version of the reduced and combined .FIT data is available here, if you wish to examine the data.

The combined 10 min. image was scaled by 50% in Maxim to approximate the 5 minute exposure level.  The images were then stretched and equalized for .jpg presentation below.


6 x 10 min.                                                               13 x 5 min.

I measured SNR, defined as the ratio of mean to standard deviation, at three points on the FIT image.



                 6 x 10 minutes    13 x 5 minutes
Background       89                123
Faint area       20                28
Brighter area    14                18
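The mean-over-standard-deviation measurement used here is easy to reproduce.  Below is a sketch assuming NumPy; `patch_snr` and the synthetic Poisson frame are illustrative stand-ins, not the actual M1 data.

```python
import numpy as np

def patch_snr(img, y, x, size=20):
    """SNR of a (size x size) patch, defined as mean/std as in the text."""
    patch = img[y:y + size, x:x + size].astype(float)
    return patch.mean() / patch.std(ddof=1)

# Synthetic check: a flat sky of 1470 e- with shot noise should give a
# single-frame SNR close to sqrt(1470), i.e. about 38.
rng = np.random.default_rng(0)
frame = rng.poisson(1470, size=(100, 100))
print(f"single-frame background SNR: {patch_snr(frame, 0, 0, 100):.0f}")
```

For a shot-noise-limited background, a single frame's SNR approaches the square root of the mean count, and stacking N frames raises it by roughly the square root of N.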

For this experiment, there was no loss of SNR from following the above recommendations; in fact, there was a slight gain.  Even though the combined exposure time in the 13 x 5 case is 8% longer than in the 6 x 10 case, that is not enough to explain the increased SNR.  It appears that significant SNR improvement can be achieved by combining dithering with statistical combine methods such as Sigma, and that these techniques can bring us closer to unguided imaging.

Based on this somewhat unexpected SNR increase with shorter exposures, it appears that, at least for my system and skies, there is some SNR headroom that would allow even shorter exposures, getting closer to unguided, high resolution imaging.  Alternatively, perhaps 11 x 5 minute exposures would give a similar SNR without any additional total exposure time.

Here is the resulting color image, which adds 4 x 5 min. Red, 4 x 4 min. Green and 4 x 7 min. Blue, all binned 2x2.  Only DDP processing and USM in Photoshop were used.

M1 - Total Exposure was 129 minutes at F/9



Given my approximately mag 20/arc-sec² skies, there appears to be no loss in SNR when collecting data with frame exposures shorter than the readout-noise-overwhelmed recommendation.  By using careful data reduction techniques, one can indeed take more, shorter sub-exposures without suffering a loss in SNR.  This approach has the added advantage for RGB data that more frames allow better Sigma-reject processing.

To summarize these techniques:

• Use floating-point processing for all calibration and combination
• Build a significant library of dark and flat calibration frames
• Use standard deviation processing techniques such as Ray Gralak's Sigma or Mean-sigma clipped in Mira 7.

Further experimentation by others with other sky conditions would be welcome. 


1/14/2004 Update:  Stan Moore has evaluated the FITS files for the above two cases using a different method to calculate the SNR.  His result indicates the 6 x 10 min. stack has a 2%(!) higher SNR than the 13 x 5 min. stack.  The download "penalty" for a USB-equipped ST-10 is an additional 70 sec.  So, for a total of 6.1 minutes more, you have the flexibility of shorter exposures with a negligible 2% decrease in SNR and an arguable increase in overall appearance.  This agrees well with the theory discussed above: a 1% decrease in SNR was predicted for the 13 x 5 min. case, based on Al's model.  I suggest you evaluate this for yourself using your skies and your systems.  It may be worth the exercise.


1/24/2004 Further Thoughts: After thinking about some recent discussions, here are some further observations:

The focus of this paper is on acquired SNR.  Because noise adds in quadrature while signal adds linearly, more frames will always improve the SNR.  This paper identifies the SNR penalty, or the slight increase in exposure time needed to avoid it, when shorter duration sub-frames are used, along with the post-acquisition advantages.  Nothing more and nothing less.
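That quadrature argument is easy to verify numerically.  Here is a toy Monte-Carlo check with NumPy, using made-up signal and noise levels: summing n frames multiplies the signal by n but the noise by only the square root of n.

```python
import numpy as np

rng = np.random.default_rng(2)
signal, noise_rms, n = 50.0, 10.0, 16

# n frames of constant signal plus independent Gaussian noise
frames = signal + rng.normal(0, noise_rms, size=(n, 100_000))
summed = frames.sum(axis=0)

sig_gain = summed.mean() / signal      # should be close to n = 16
noise_gain = summed.std() / noise_rms  # should be close to sqrt(n) = 4
print(f"signal x{sig_gain:.1f}, noise x{noise_gain:.1f}")
```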

I doubt there is any processing technique that can change the ratio of sky glow to object.  Photons are photons, whether they come from sky glow or an extended object.  If the faint area's brightness is below the sky glow, you will never detect it, even with an infinite-SNR detector.  However, if the faint area is slightly above the sky glow, then any technique that reduces the noise in the image improves the Object to Sky Ratio (OSR) by allowing more stretching of the signal, effectively treating the sky glow as a bias or offset.  Whether you take more long frames or twice as many shorter frames, both approaches improve the OSR.  The only difference between the two is around a 2-4% SNR loss, which can be mitigated with one more short exposure.  So why bother with more, shorter exposures?

Post-acquisition techniques that increase the apparent SNR of the sky glow, and perforce the object, allow more stretching before the threshold of unacceptable noise is reached.  Of course, the post-acquisition challenge is to increase the SNR without sacrificing resolution.  One advantage CCD imaging has over real-time signal acquisition is that the data is nominally stationary, i.e., time-invariant.  This allows multiple detections to acquire the data and subsequent combination techniques to improve SNR.  Classical averaging, which is nothing more than a sum divided by the number of samples, is the most elementary technique.  Next, median combine is used to eliminate outlying data, such as cosmic ray hits, satellite trails and the like.  Why stop there?  Standard deviation clipping techniques such as Sigma and sigma clipping, used judiciously, can further enhance the data detection threshold.  In all cases, more frames give better SNR.  So why bother with more, shorter exposures?
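As a sketch of what such a clipping combine does, here is a minimal one-pass version in NumPy.  Sigma and Mira implement more refined, iterative variants; the frames and the injected cosmic-ray hit below are synthetic.

```python
import numpy as np

def sigma_clip_combine(stack, sigma=2.0):
    """Mean-combine registered frames, rejecting any pixel sample more than
    `sigma` standard deviations from the per-pixel mean (single pass).
    `stack` has shape (n_frames, height, width)."""
    stack = np.asarray(stack, dtype=float)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0, ddof=1)
    keep = np.abs(stack - mean) <= sigma * std
    # Average only the surviving samples at each pixel
    return (stack * keep).sum(axis=0) / keep.sum(axis=0)

# Toy example: 12 registered frames of a flat 100 e- field with noise,
# plus one cosmic-ray hit in a single frame
rng = np.random.default_rng(1)
stack = rng.normal(100, 5, size=(12, 8, 8))
stack[3, 4, 4] += 500                 # transient event in frame 3
combined = sigma_clip_combine(stack, sigma=2.0)
print(abs(combined[4, 4] - 100) < 10)  # outlier rejected from the mean
```

A straight average of the same stack would leave a residual of roughly 500/12, about 42 e-, at the affected pixel; the clipped combine discards the one deviant sample and averages the rest.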

The statistics of advanced combination techniques do require more samples to be effective.  A sample based on 3 or 4 data points is not statistically significant; 6 samples starts to get there.  In applied statistics, 30 samples is generally considered statistically significant, i.e., the processed results of a sampled data system will fairly represent the data.  Michael Newberry, author of Mira AP, states in the version 7 help topics: "Sigma clipping discards high and low extreme values in a way you can control with the clipping parameters. This method requires a large number of images, on the order of 20 or more in order to compute good clipping criteria at each coordinate."  A lower number of samples is always a compromise, but the compromise improves with more samples.  Below a certain point (3 for median, 6 for standard deviation clipping), you are actually worse off.

And that is why one bothers with more shorter exposures.

Of course, since the results of standard deviation clipping are somewhat unpredictable, it is probably more applicable to esthetic imaging than to photometry, for example.



Thanks to Al Kelly for his work on the SNR spreadsheet that provided the analytical impetus to this work.  And, of course, thanks to Ray Gralak for making such a powerful tool as Sigma available. 

