It has been a while since I updated my techniques description, so I thought I would describe what I am currently doing. I am by no means an expert in image processing, but I hope that by sharing the techniques I use, you may get some new ideas. Some of the techniques come from other, more experienced imagers, like Rob Gendler, Russ Croman, Ken Crawford and Jerry Lodriguss, who have generously shared their techniques with the community. To paraphrase Isaac Newton: "We all stand on the shoulders of giants." Some are my own and are offered here in that same spirit.
Disclaimer: I am the principal author of CCDAutoPilot3 and co-founder of CCDWare. Since I am describing my techniques, it is only reasonable I use products that I create or help sell. I hope you don't see this as some kind of thinly-veiled advertisement as that is not my intent.
|CCDAutoPilot version 3 for automated image acquisition|
|CCDSoft version 5 for camera control|
|TheSky6 for telescope control|
|FocusMax version 3 for focuser control|
|RCOS TCC for rotator and focuser control|
|CCDInspector for image assessment|
|CCDStack for most data reduction steps|
|Maxim for color combine|
|PhotoShop CS for final work|
|Kodak GEM Pro for noise reduction|
|Gradient Xterminator for gradient removal|
|PEMPro for Periodic Error Correction|
|AIP4WIN for deconvolution|
It all starts with getting a lot of good data. I generally image over two nights, although occasionally I can get sufficient data in one night. I image at 0.55 arc-sec./pixel with a blue-enhanced, non-microlensed camera and use AstroDon filters. My RGB combine ratio is 0.8/1/1.2. I currently use 10-minute sub-exposures throughout, with RGB most often binned 2x2.
Most of my imaging is done guided by an off-axis guider ahead of the filter wheel. This gives a constant guide exposure independent of filter and, with the ST-402ME, the extraordinary sensitivity and FOV ensure a guide star is always available.
|Focusing: I use SkyStar focusing, which is a facility of CCDAutoPilot.
SkyStar allows off-target focusing so that a suitable focus star is
always available. The process begins with a plate solve of the target
location, a slew to a magnitude 4 - 7 focus star with optional plate
solving to center it in the FOV and then calling the FocusMax focus routine.
Up to 3 stars are tried, in the event of focus failure on the first one.
After focusing is complete, the telescope is precisely returned to the target
FOV within a couple of arc-sec. Plate solve success probability is
increased by the ability to specify a plate solve filter different from the
desired focus/exposure filter. I generally set the minimum focus
altitude to be greater than 60° and the focus frequency to every 60 minutes.|
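The try-up-to-three-stars behavior described above amounts to a simple retry loop. Here is a minimal sketch in Python; `focus_with_retry` and `focus_fn` are hypothetical names standing in for the candidate star list and a wrapper around the FocusMax routine, not actual CCDAutoPilot APIs.

```python
def focus_with_retry(stars, focus_fn, max_tries=3):
    """Try candidate focus stars in order; return the first that focuses successfully."""
    for star in stars[:max_tries]:
        if focus_fn(star):  # e.g. a wrapper around the FocusMax focus routine
            return star
    raise RuntimeError("focus failed on all candidate stars")
```

After a success (or exhausting the candidates), the mount would be slewed back to the target FOV.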
|Tracking and Guiding: I typically do guided imaging using DirectGuide,
looking for a guide exposure of 3 - 4 seconds and a minimum ADU count of 4000.
This is easily achievable with the ST-402ME since I can guide with a magnitude
12.7 star with 5 second exposures giving a 3000 ADU level with the guider
binned 2x2. I dither each exposure to remove artifacts by ±3 pixels.
Guiding minimum and maximum exposure times are set accordingly.
I use CCDAutoPilot's Automatic Guide Star Recovery to allow the guide error to
get within 0.7 pixels in 20 or fewer guide cycles (typically 5), so that I
don't have to worry about programming any guider recovery delays. I use
automatic meridian flip so that I don't have to worry about meridian issues
and can time my exposures so that the clear filter series is split by the
meridian. The guide star is automatically reacquired upon flipping the
meridian and the series continues without user intervention. After
initialization, guider calibration is never required again, no matter
where the rotator is or where in the sky the target is, unless the instrument configuration changes.|
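Dithering by a few pixels between sub-exposures moves fixed-pattern defects (hot pixels, column flaws) to different sky positions so that later data rejection can remove them. A sketch of generating ±3 pixel offsets, one per sub-exposure:

```python
import numpy as np

rng = np.random.default_rng(1)

def dither_offsets(n_subs, max_shift=3.0):
    """One random (x, y) pointing offset per sub-exposure, within +/-max_shift pixels."""
    return rng.uniform(-max_shift, max_shift, size=(n_subs, 2))

# e.g. offsets for an 18-frame clear-filter series
offsets = dither_offsets(18)
```

In the real system the offsets are applied through the guider/mount, and registration later removes them again before combining.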
|Light Frames: My imaging sequence is typically 3R, 3G, 3B, 18C, 3B,
3G, 3R for galaxies, which are my favorite targets. The color sequence
is arranged in order of impact by atmospheric extinction. Since R is
impacted the least, it is imaged lower in the sky. Since B is impacted
the most, it is imaged higher in the sky. Sometimes I will go back the
next night for more data. This is easily accomplished by CCDAutoPilot's
ability to save a target list and go back to it precisely from night to night.
The 18C series is typically split by the meridian. I use a clear filter
instead of an IR-blocked Luminance when I am trying to catch as much faint
detail as possible. I use a Luminance filter on brighter objects.|
|Dark and Bias Frames: I typically use library dark frames comprised
of 16 dark frames min-max clip combined in CCDStack. I have masters for
1x1 and 2x2 binning and use -25° for most of the year. The number of
master darks can be determined analytically, and there is a convenient
spreadsheet for doing so. I typically refresh my library every two months.|
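Min-max clip combining, as used for the library master darks above, discards the single lowest and highest value at each pixel position and averages the remainder. A numpy sketch (a generic implementation, not CCDStack's code):

```python
import numpy as np

def minmax_clip_combine(frames):
    """Drop the min and max value at each pixel position, then mean the rest."""
    stack = np.stack(frames).astype(np.float64)
    total = stack.sum(axis=0)
    # Subtracting the per-pixel min and max leaves the sum of the middle values
    trimmed = total - stack.min(axis=0) - stack.max(axis=0)
    return trimmed / (stack.shape[0] - 2)
```

With 16 frames, a cosmic-ray hit on any single frame is guaranteed to be excluded at that pixel.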
|Flat Frames: Given the high SNR of a flat frame, only one flat is required
to not impact the overall object SNR. (With my camera, a 20,000 ADU flat
corresponds to 44,000 e, which has a SNR of 210!) However, since I take
sky flats, there is always the possibility of a star or some undesired
artifact so I generally take 3 to allow artifact removal. I take flats
through each filter at the position angle of the target. For my location
and light pollution, I have found that I only need one orientation of red and
blue flat per target but need both orientations for green and clear filters.
All flats are taken at 1x1 binning for best results. CCDStack
automatically scales them as appropriate for 2x2 binned data. I take my
sky flats with tracking on and dither each flat exposure by 6 arc-min.
This allows any bright star that shows up to be removed during the combine.|
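The flat SNR arithmetic quoted above follows from shot-noise statistics: the gain implied by 20,000 ADU corresponding to 44,000 e⁻ is 2.2 e⁻/ADU, and the shot-noise-limited SNR of a single flat is the square root of the collected electrons.

```python
import math

gain = 2.2                        # e-/ADU, implied by 20,000 ADU <-> 44,000 e-
flat_level_adu = 20_000
electrons = flat_level_adu * gain # 44,000 e-
snr = math.sqrt(electrons)        # shot-noise-limited SNR
print(round(snr))                 # -> 210
```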
|A typical run consists of dusk flats for any flats that were missed the previous dawn and acquire some darks for the library until it is time for the light exposures to start. I program in some more darks to begin when the light frames are completed. These darks are interrupted automatically when it is time for the dawn flats to be taken. Once the flats are completed, the telescope is parked, the imager and guider coolers are turned off and a shutdown script is run.|
I will have one directory for each target with all the data frames in it. I will also have a calibration directory with all the flats and any darks in it. I use CCDInspector to assess the data for star size (FWHM) and tracking (aspect ratio). I discard any abnormally high FWHMs or excessively poor aspect ratios. Typical FWHMs for unbinned data are 1.9 - 2.4 arc-sec. and aspect ratios are around 6 - 10%, indicating satisfactory guiding. My next step is to reduce the data.
I am using the term "reduction" to mean the process of going from the raw data acquired above to master LRGB frames. This includes calibration, data rejection, registration, normalizing, more data rejection and finally combination. These steps are followed for each series - R, G, B and L and are all performed in CCDStack.
|Calibration: First I need to create master flats as appropriate. I
do this in CCDStack by bringing in the three flats and correcting the bias.
Since I have a constant bias level of 3600 ADU, I use Pixel Math to subtract
that value from the three frames. I then normalize the flats, Poisson
Reject and Mean combine to make the master flat. Once the master flat is
prepared, I apply it and the appropriate master dark to calibrate the
corresponding light frames. If I am using different flats for different
sides of the meridian, I disable the images from one side while I apply the
dark and flat to the other side. I then reverse the process to calibrate
the first side. At this point, all the calibrated images are in the stack.|
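The master-flat and calibration steps above can be sketched in numpy. This is a simplified outline, not CCDStack's algorithm: a plain mean combine stands in for the Poisson reject step, and the constant 3600 ADU bias is subtracted as in the Pixel Math step.

```python
import numpy as np

def make_master_flat(flats, bias_level=3600.0):
    """Subtract the constant bias, normalize each flat to its mean, then mean-combine."""
    frames = [f.astype(np.float64) - bias_level for f in flats]
    frames = [f / f.mean() for f in frames]
    return np.mean(frames, axis=0)

def calibrate(light, master_dark, master_flat):
    """Standard calibration: dark-subtract, then divide by the normalized master flat."""
    return (light.astype(np.float64) - master_dark) / master_flat
```

Because the master flat is normalized to a mean of 1, dividing by it flattens vignetting without changing the overall signal level.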
|Data Rejection: Before going further, I use Bloom Removal to remove any
blooms, defined as any pixels whose value is greater than 50,000 ADU.
After identifying these pixels, I impute an appropriate value for these
pixels, based on surrounding data. This makes subsequent clean-up in
PhotoShop much easier. In my current situation, I have some subtle
column defects on my imager that I remove with the pixel math compiler feature.|
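Bloom repair of the kind described — find pixels above a threshold and impute a value from surrounding data — can be sketched with a median filter. This is a generic stand-in for whatever interpolation the actual tools use; the 50,000 ADU threshold matches the text.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_blooms(img, threshold=50_000):
    """Replace pixels above threshold with the median of their 5x5 neighborhood."""
    out = img.astype(np.float64).copy()
    mask = out > threshold
    # The 5x5 median is dominated by the unbloomed neighbors, so it imputes
    # a plausible local value for each flagged pixel.
    out[mask] = median_filter(out, size=5)[mask]
    return out
```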
|Registration: I prefer star snap as the quickest and most accurate way to
register the dithered images. I select 4 - 5 reference stars, selecting
the last one near the center of the image, since that is the point of rotation
for registration. I then use Snap All to register the data. On the
Apply tab, I use Nearest Neighbor for registration of the luminance data,
since it does the best job of preserving the noise statistics and minimizes
the resultant star size. For the RGB data, I currently use Bi-Cubic B
spline to most accurately register the 2x2-binned color data.|
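The interpolation trade-off above can be illustrated with `scipy.ndimage.shift`: order=0 is nearest neighbor (pixel values are moved, never blended, so noise statistics survive), while order=3 is a cubic spline (sub-pixel accuracy at the cost of some smoothing). This is only an illustration of the two interpolation families, not CCDStack's registration code.

```python
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(0)
img = rng.poisson(100, size=(64, 64)).astype(np.float64)  # noisy test frame

lum = shift(img, (2.3, -1.7), order=0)  # nearest neighbor: values reused verbatim
rgb = shift(img, (2.3, -1.7), order=3)  # cubic spline: interpolated, slightly smoothed
```

Every pixel in the nearest-neighbor result is a value that already existed in the input (aside from border fill), which is why it preserves the noise distribution.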
|Normalization: I generally select the target area for normalization.
Normalization is required for effective data rejection.|
|Data Rejection: I prefer to use Poisson sigma reject with top image 1%.
In the case of unwanted satellite or airplane trails, I have used the freehand
draw feature to trace out the errant trail to have it be rejected.|
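Normalization followed by sigma rejection can be sketched as follows. A generic sigma clip stands in for CCDStack's Poisson sigma reject, and whole-frame means stand in for the selected target area; the point is that once frames are on a common scale, outliers such as satellite trails deviate far enough from the per-pixel median to be rejected.

```python
import numpy as np

def sigma_reject_mean(frames, sigma=2.5):
    """Normalize each frame to the first, reject per-pixel outliers, mean the rest."""
    stack = np.stack([f * (frames[0].mean() / f.mean()) for f in frames])
    med = np.median(stack, axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - med) <= sigma * std
    # Mean of the surviving values at each pixel location
    return (stack * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)
```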
|Combination: I sum combine the luminance data and average combine the color data.|
I save each master at this point in the process.
I register the RGB to the L in CCDStack by using the L as the reference and registering the RGB to it using BiCubic B spline. I then save the registered RGB masters.
I will generally deconvolve the luminance data, depending on the quality. I compare deconvolution in CCDStack to AIP4WIN and will use whichever one I like. This is a somewhat time-consuming process as each deconvolution can take as much as 20 minutes on my reasonably fast PC.
Until recently, the only acceptable software package for me for RGB combining was Maxim. I would subtract most of the background, leaving a level of 50 ADU for each color. I would add in an extinction-corrected color combine ratio and add the L at 60% for an LRGB combine. I then DDP the result to spread the usable color information over more of the 16-bit range, prior to saving as a 16-bit TIF for PhotoShop.
I have been experimenting with CCDStack's create color capability. After setting the extinction-corrected color combine ratios, I neutralize the background and am generally satisfied with the results. By careful adjustment of maximum, background and DDP, I can create an acceptable RGB master. Occasionally, a bright core will saturate and I will revert to Maxim for color combine.
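The combine-ratio and background steps common to both programs can be sketched generically: weight each channel by its ratio (0.8/1/1.2 here, per the text), then reset each channel's background to a small pedestal so the sky comes out neutral. This is an approximation for illustration, not either program's actual algorithm, and it uses the frame median as the background estimate.

```python
import numpy as np

def combine_rgb(r, g, b, ratios=(0.8, 1.0, 1.2), pedestal=50.0):
    """Weight each channel by its combine ratio, then set each background to a pedestal."""
    channels = []
    for chan, weight in zip((r, g, b), ratios):
        scaled = chan.astype(np.float64) * weight
        # Median as a crude background estimate; shift it to the pedestal level
        channels.append(scaled - np.median(scaled) + pedestal)
    return np.stack(channels, axis=-1)
```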
At this point, I have Decon and L .FIT files and either an LRGB or RGB TIF file.
For years I have been doing histogram shaping using Curves and Levels and still do to this day. My typical first step is a curve that looks like a hockey stick, with the puck end near the origin. After the first stretch, I put a sampler on the brightest part of the object to provide feedback against saturation. I repeat this 2 - 3 times, occasionally resetting the level to maximize the stretch range while suppressing the background. I will do this for RGB and luminance data.
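A "hockey stick" curve of this kind — a steep rise near the origin followed by a gentle slope to white — can be modeled as a piecewise-linear transfer function on normalized data. The knee and lift values below are arbitrary illustrations, not settings from the text; in practice the shape is adjusted by eye in Curves.

```python
import numpy as np

def hockey_stick(img, knee=0.05, lift=0.35):
    """Steep linear rise from (0, 0) to (knee, lift), then a gentle slope to (1, 1)."""
    x = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0)
    return np.where(x < knee,
                    x * (lift / knee),                       # "puck end" near the origin
                    lift + (x - knee) * (1.0 - lift) / (1.0 - knee))
```

Applied two or three times with decreasing aggressiveness, this mimics the repeated stretches described above: faint signal is lifted strongly while the bright end is compressed short of saturation.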
If I don't have an LRGB already, I will make one in PS. Using the RGB as the background, I will layer in the L with the combine method set to Luminance. I will typically boost the color saturation by 25 - 30 points and set the L opacity to 50 - 60%. I will merge the layers and do either a 2 pixel Gaussian blur or a coarse detail noise reduction using GEM Pro. This now becomes my new RGB.
I bring in the L or Decon via the FITS Liberator. I lower the default background by around 100 counts to give some black level room and set the white level so that any object detail is not saturated, as indicated by the sampler. I occasionally will use the log stretch for better low level detail. I usually have to try this a few times to get an acceptable result. After importing, I flip the image to have it agree with the color data.
Once again, some more hockey stick action is applied to the Decon as described above. I then layer this into the new RGB, again as a luminance layer. At this point, I save this as a .PSD file, since this represents a "raw" starting point.
After saving, I will iterate the luminance curves and levels slightly, perhaps adjust the saturation a bit. Once this is done, I will flatten the image.
I focus on the object and ignore any star artifacts at this point. I duplicate the image into a layer, set the blending mode to overlay and select the high pass filter. Focusing on the high signal level of the target, I adjust the amount of high pass filtering to bring out any details without over-cooking it. Once this is done, I select a Layer Mask to Hide All. I then use the eraser tool to expose the mask, thereby showing any detail I want, without overdoing the lower signal levels, stars, etc.
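The high-pass detail-enhancement step can be approximated numerically: subtract a Gaussian blur to isolate high-frequency detail, then add a scaled amount back. This is a simple additive blend rather than Photoshop's overlay math, and it omits the layer mask; the radius and amount are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass_sharpen(img, radius=4.0, amount=1.0):
    """Boost local detail: high-pass = image - blur; add it back scaled by `amount`."""
    base = np.asarray(img, dtype=np.float64)
    high = base - gaussian_filter(base, sigma=radius)
    return np.clip(base + amount * high, 0.0, None)
```

Flat regions are untouched (their high-pass component is zero), while edges and fine structure are emphasized, which is why the layer-mask step in PS is needed to keep it off stars and background.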
I next apply GEM Pro in the fine detail mode to reduce background noise. There are controls that allow this to be done to taste.
Gradient Xterminator comes next, using high aggressiveness, fine detail and balance background color. See the instructions for this plug-in. I find this much easier to use than the various gradient techniques described for PS, but then perhaps my gradients are milder than others'.
Finally, I will touch up any stars that are excessively large using a technique Rob described to me a very long time ago. The image must be in 8-bit mode to use this tool, at least in my version of PS; later versions may allow it to be used in 16-bit mode. Use the elliptical marquee tool. While holding down ALT and SHIFT, drag the mouse from the center outward. You will get a circular marquee. Feather the selection by a few pixels - I use 3 - 6. Use Filter | Blur | Radial Blur, Spin Method, Best Quality with an amount of 40 or so. This should round out the star. Then use Filter | Distort | Spherize with an amount of -60 or so to shrink the star. These values are rough guidelines only and some experimentation is always appropriate.
When I think I am done, I save the image and go away from it for a while. It is amazing what I see wrong when I come back after a couple of hours! I will very slightly adjust levels to get an average background of 15 - 20 counts and perhaps slightly adjust curves to my liking. I then consider the image ready for public review.
What I have written above describes my general approach. There are others much more accomplished that could add reams to what has been written - some already have. Hopefully this will give you some ideas for your own imaging work.