= A quick guide to data analysis =
== Data reduction ==

=== Why the data need calibration ===

Images of an astronomical object taken with a CCD camera include many unwanted signals of various origins. The goal of the calibration is to remove these effects, so that the pixel values of the calibrated images are an accurate representation of the sky light that fell on the camera during your observations.

This quick tutorial will walk you through the standard reduction steps, with an example of analysis with IRAF.

=== Effects to correct ===

  * '''''Bias''''' When the CCD is read out, a small voltage offset is applied to all pixels. This electrical offset is called the ''bias''. It has to be '''subtracted''', so that the zero points of the CCD output and of the pixel-value scale coincide.
  * '''''Dark Current''''' The thermal agitation of the silicon in a CCD produces electrons, even when no light falls on the CCD (hence the name 'dark' current). Dark current is always present, i.e. also in your sky images. It scales linearly with exposure time, at a rate dependent on the CCD temperature, and needs to be '''subtracted'''.
  * '''''Non-uniformity (flat-fielding)''''' The conversion of light to electrical signal on the camera varies with position, both on small scales (pixel-to-pixel), due to the changing quantum efficiency of individual pixels, and on larger scales, because of vignetting and dust shadowing. To correct these effects, we need to '''divide''' the data by the normalised sensitivity of each pixel. This is estimated by observing a (theoretically) uniform source of light. The variations observed in such an exposure, called a '''flat-field''', are a measure of the non-uniformities of the camera. Note that the flat-field(s) need to be ''first'' corrected for the two effects described above.
  * '''''Cosmic rays''''' Cosmic rays (CR) are high-energy particles that interact with the camera, leaving bright spots or tracks in the images. By combining multiple images using the average, or better, the ''median'' of the pixel values at each pixel, the extreme values produced by CR hits are easily spotted and removed. The image combinations are often done with a rejection algorithm to remove extreme variations. If an obvious CR track remains in a combined image, it is better to find out from which individual image it originates, and remove the CR prior to combining the images.
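Putting the first three corrections together, the calibration reduces to a couple of array operations. Here is a minimal sketch in Python/NumPy; all the frame values and the exposure time are invented for illustration (real frames would be read from your FITS files):

```python
import numpy as np

# Hypothetical 2x2 frames with made-up values, just to show the arithmetic.
raw       = np.array([[1100.0, 1210.0], [1090.0, 1195.0]])  # science frame (counts)
bias      = np.full((2, 2), 100.0)                          # master bias
dark_rate = np.full((2, 2), 2.0)                            # bias-subtracted dark current (counts/s)
t_exp     = 10.0                                            # exposure time of the science frame (s)
flat      = np.array([[0.9, 1.1], [1.0, 1.0]])              # normalised master flat-field

# Subtract the bias, subtract the scaled dark, divide by the normalised flat:
calibrated = (raw - bias - dark_rate * t_exp) / flat
```

In practice these steps are performed by dedicated IRAF tasks, as detailed below, but the underlying arithmetic is exactly this.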
The text in verbatim blocks shows the IRAF commands used, or their output. Lines starting with "#" are comments. Be sure to insert the correct filenames, i.e. those of your own files! (Make use of the Linux wildcards * and ?) The workflow follows the analysis sequence described in the [[ForPraAnalysis|Quick Analysis Guide]].
The official documentation for CCD reduction within IRAF is e.g. [[http://iraf.net/irafdocs/ccduser3.pdf|the guide by Philip Massey]].
=== What do you need for data reduction ===

To calibrate your sky data, you will need a set of ''calibration frames''. They should be taken close in time to your sky observations, ideally during the same night.

The minimum required is:
  i. A set of '''bias frames'''. These are zero-second exposures (the shutter remains closed). In principle the bias is constant, but statistical fluctuations and interference introduce some noise. It is best to combine several (e.g. 10) bias frames: the root-mean-square noise decreases as the inverse square root of the number of frames combined.

  i. A set of '''dark frames'''. These are long exposures taken with the shutter closed. You can either match the exposure time of the dark frames to that of your sky observations, or use a ''scalable'' dark frame, from which the bias has been subtracted. The second option is more flexible. Take a set of dark frames with increasing exposure times (from short to long). Here combining the dark frames mostly helps to remove CR hits (high-energy particles do not "see" the shutter...).

  i. A set of '''flat fields'''. These are images of a uniform source of light; usually the twilight sky is the best choice. The exposure time should be set so that the pixel values reach a good fraction (20%-50%) of the full-well capacity. For the GOWI camera you should aim for ~20 000 counts. Good statistics are needed to reveal the desired level of detail. An automatic sequence to produce twilight-sky flat-fields is available. Note that because vignetting and CCD sensitivity are colour-dependent, flat-fields ''must'' be taken with the same filter as the images to calibrate. As before, several exposures are taken and combined.
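The inverse-square-root noise reduction mentioned for the bias frames is easy to verify numerically. Below is a small simulation sketch in Python/NumPy, using synthetic frames with purely Gaussian read noise (an idealisation; the numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n_frames = 5.0, 25   # per-frame RMS noise, number of bias frames

# Synthetic bias frames: a constant level of 100 counts plus Gaussian read noise.
frames = rng.normal(100.0, sigma, size=(n_frames, 100, 100))

# Averaging the stack reduces the pixel-to-pixel scatter
# to roughly sigma / sqrt(n_frames) = 5 / sqrt(25) = 1.0.
master = frames.mean(axis=0)
```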


=== In practice ===

==== Typical calibration sequence ====

1. Look at what you have:
   Sit down and sort your data. If you have taken images of several objects, it is best to keep the calibration frames, which will be used for all objects, separate from the object data. Make sure you know which files are bias frames, dark frames, flat-fields (in which filter), etc. Ideally, this should be obvious if you used a proper naming convention for your files.

2. Prepare the master bias frames
   Combine all the bias frames in one image (the master bias). You can either take the average or median of each pixel value. The first method reduces random noise more effectively, the second is better at excluding abnormal (extreme) values.
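The difference between the two combining methods can be seen in a toy comparison (hypothetical pixel values; in IRAF the combination is done by the zerocombine task described later):

```python
import numpy as np

# Stack of five fake bias frames (2x2 pixels each); frame 2 carries a cosmic-ray hit.
stack = np.full((5, 2, 2), 100.0)
stack[2, 0, 0] = 5000.0

master_mean   = stack.mean(axis=0)       # the CR hit pulls the average up to 1080
master_median = np.median(stack, axis=0) # the median stays at 100: CR rejected
```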

3. Prepare a master (scalable) dark frame
   First, subtract the master bias from all your dark frames. Then, combine all the dark frames. The resulting master dark frame represents the dark current accumulated during <t>, the average of the exposure times of all the dark frames that were used. If you prefer not to subtract the bias separately and not to use a scalable dark, you can combine all dark frames of a given exposure time without subtracting the bias.

Check (by ''looking'' at the data) that the master dark frame has no remaining CR hits. The averaging (in particular with the median) should have removed all of them. If a CR feature remains, find which individual dark frame it comes from, and either remove the CR hits from that image or exclude it from the master dark frame.

4. Prepare the master flat-fields
   First, subtract the master dark from all your flat-field frames: use either the scalable master dark, scaled by the ratio of the flat-field exposure time to that of the master dark, or a master dark having the same exposure time as the flat-fields. Then, combine the dark-subtracted flat-fields ''separately for each filter''. At the end one obtains a master flat-field for each filter.
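Step 4 can be sketched as follows in Python/NumPy (all exposure times and pixel values are invented; with IRAF you would use the flatcombine task instead):

```python
import numpy as np

t_flat, t_dark = 2.0, 10.0           # hypothetical exposure times (s)
master_dark = np.full((2, 2), 50.0)  # scalable (bias-subtracted) master dark

# Three fake flat-field exposures in one filter (already bias-subtracted):
# a fixed sensitivity pattern plus the dark contribution for a 2 s exposure.
pattern = np.array([[900.0, 1100.0], [1000.0, 1000.0]])
flats = np.stack([pattern + master_dark * (t_flat / t_dark)] * 3)

# Scale the dark to the flat exposure time, subtract, combine, normalise.
dark_sub = flats - master_dark * (t_flat / t_dark)
master_flat = np.median(dark_sub, axis=0)
master_flat /= master_flat.mean()    # normalise the master flat to mean 1
```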

5. Process the raw data

   a) Preparation
      Again, have a first look at the raw data. What kind of object do I have? What are the exposure times and filters used? It is a good idea to keep a backup copy of your raw data, and to do the calibration in a directory separate from the one with the (processed) calibration frames. Copy the necessary calibration frames into the working directory of your choice.

   b) Subtract the master bias
      This is done for all images and is neither exposure time nor filter-dependent.

   b) (bis) Subtract only the master dark
      If you choose not to subtract the bias and dark separately, you can use a non-bias-subtracted master dark ''having the same exposure time'' as the science data. Then skip step c). and go directly to flat-field correction.

   c) Subtract the scaled master dark frame
      Subtract the master dark frame, scaled to match the exposure time of each raw sky exposure.

   d) Divide by the corresponding flat-field
      Divide the dark-subtracted images by the master flat-field taken with the same filter. The flat-field should be normalised by its mean, so that the mean value of the sky images remains unchanged. You can either produce normalised master flats (in step 4) or scale the flat-field by its mean when doing the division.
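If the master flat was not normalised beforehand, the normalisation can be applied on the fly during the division. A toy example in Python/NumPy (values invented):

```python
import numpy as np

master_flat = np.array([[0.9, 1.1], [1.0, 1.0]]) * 2.0  # un-normalised flat (mean 2)
image = np.array([[1800.0, 2200.0], [2000.0, 2000.0]])  # dark-subtracted science frame

# Dividing by (flat / mean(flat)) removes the sensitivity pattern
# while preserving the mean level of the image.
flat_corrected = image / (master_flat / master_flat.mean())
```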

   e) Align and stack the calibrated images
      It is common practice to split the total desired exposure time into several exposures. This allows good rejection of CR hits, avoids sky-tracking uncertainties, which leave star trails if the telescope does not follow the sky rotation accurately enough, and can be used to avoid saturating the brightest stars in the field. All calibrated images of one object in one filter are then stacked. If the pointing changes (even slightly) from one exposure to another, it is necessary to align them first on a common grid, to avoid a "blur" in the combined image. This can easily be done by matching the positions of several bright objects in the field.
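The align-and-stack step can be illustrated with whole-pixel shifts (np.roll) as a crude stand-in for proper image registration; real alignment tools resample at sub-pixel accuracy. All offsets and values below are invented:

```python
import numpy as np

def shift(img, dy, dx):
    """Shift an image by whole pixels (edges wrap around; fine for this toy case)."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# Two fake exposures of one "star"; the second is offset by (1, 1) pixels.
base = np.zeros((5, 5))
base[2, 2] = 1000.0
exposures = [base, shift(base, 1, 1)]

# Offsets of each exposure relative to the first, e.g. measured from bright stars.
offsets = [(0, 0), (1, 1)]

# Undo each offset, then combine; a median combination also rejects remaining CR hits.
aligned = [shift(img, -dy, -dx) for img, (dy, dx) in zip(exposures, offsets)]
stacked = np.median(np.stack(aligned), axis=0)
```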

   f) Look and check
      Be sure to check the final images and compare them to the raw data: Are there remaining CR hits? Are stars more blurred in the final image than in the raw data? Are there remaining large-scale gradients (incorrect flat-fielding)? Redo the necessary steps accordingly. The calibration is now done! You can go on with the '''analysis'''.

==== A working example with IRAF ====

[[/IRAFexample| On this page]], we show an example of image calibration using IRAF.
== Photometry ==

Photometry is the technique of measuring the brightness of astronomical objects. To do so, we quantify the signal from a given source collected on our camera during the exposure time. In other words, we only measure the '''instrumental flux''' of a source. To measure the intrinsic luminosity of an object, we need a flux calibration (how much signal is recorded for a given number of photons from the source) and a knowledge of the distance and extinction towards the source (see e.g. [[ForPraSternenhaufen| description of the star cluster experiment]]).

Various methods are available to perform photometry.
  * The simplest is '''aperture photometry''': the signal is integrated over a circular aperture defined around the star of interest. A second region, usually a concentric annulus around the circular aperture, is used to estimate the sky background. The background signal is scaled to account for the relative sizes of the source and background extraction regions, before being subtracted from the source signal.
  A number of problems limit the use of aperture photometry for some purposes:
    * Choice of aperture: An aperture too small will ignore the fraction of the flux of a star that is in the wings of the PSF. An aperture too large will include noise from the sky background.
    * Crowding: It becomes increasingly difficult to define source and background regions in ''crowded'' fields, where stars are close to one another. In some cases (poor seeing, globular clusters...), aperture photometry might even be impossible, because stars are blended and cannot be separated anymore.

  * The way to go beyond the limits of aperture photometry is to perform so-called '''PSF-fitting photometry''' (or PSF photometry for short). There, a model of the point-spread function (PSF) is determined for an image using some representative stars. This model is then fitted to all the stars detected in the image. It is then possible to know the contribution from an object and from the sky in a given region. Even if several stars are present in that region, their signals can be separated by fitting the PSF to each of them. There is also no concern about the size of an aperture, because all the signal under the PSF can be integrated. This method is of course more demanding than simple aperture photometry. In particular, it requires careful work while preparing the PSF model.

For the [[ForPraSternenhaufen|star cluster experiment]] the use of PSF photometry is strongly advised. This will increase the number of stars you can measure in globular clusters. The software used for PSF photometry is called '''DAOPHOT'''. It was developed by [[http://cdsads.u-strasbg.fr/abs/1987PASP...99..191S|Peter Stetson (1987)]] and is available as a package in the IRAF environment.

=== In practice ===

Below we describe a typical analysis sequence with DAOPHOT. The goal is to use calibrated images to perform PSF photometry and obtain a list of magnitudes for all stars in the images.

1. Make an initial star list with a source detection algorithm. This typically searches for local density enhancements with peak amplitude greater than a given threshold above the local background.

2. Perform aperture photometry on the detected objects. This gives a rough estimate of the magnitudes of the stars and helps in choosing "PSF stars" (see below).

3. Select "PSF stars", i.e. stars that will be used to build the PSF model. '''Having a good, realistic PSF model is the critical step to guarantee the success of the photometry'''. Therefore, selecting isolated stars with enough statistics is essential. There should be enough stars (between 5 and 30, for instance), distributed across the field of view to account for possible spatial variations of the PSF.

4. Compute the PSF model using the PSF stars. Various functional forms exist for the PSF model, with different numbers of parameters. In addition, the model can be constant across the image or vary with position.

5. Fit and subtract the current PSF model from the PSF stars. If the model is appropriate, the PSF stars should be cleanly subtracted from the image. If not, either adapt the list of PSF stars (e.g. add more stars, remove those that do not subtract out cleanly, etc...) or change the model used (e.g. a more complex model, or one varying with position).

6. If the current PSF model is satisfactory, fit the model to all detected stars. Thus, stars with nearby neighbours can be separated and have accurate photometry.

7. (Optional) If needed, a second source detection can be run on the fitted-star-subtracted image to search for faint stars not detected in the first run.

==== A working example with IRAF ====

[[/IRAFexamplePhot| On this page]], we show an example of PSF photometry using DAOPHOT in IRAF.

--------

== Final analysis ==

Once photometry is done, the final step is to create scientific products (plots, light curves, spectra...) and use them to draw conclusions from the results. In the framework of the [[ForPraSternenhaufen|star cluster experiment]], the scientific products to make are colour-magnitude diagrams.

=== In practice ===

Here we describe the typical sequence to follow to produce a reliable colour-magnitude diagram using the TOPCAT software.

1. First, create a tabular list containing all your sources in every filter, with information such as position, magnitude and error.

2. Feed the software with these tables and perform a cross-matching of the sources between the different filters (several matching algorithms are available). This produces a concatenated table with all the successful source pairs and all the available information.

3. The graphical interface of TOPCAT allows fast and easy plotting of the tables. A colour-magnitude diagram is a plot of the stellar magnitude in a given filter against a colour index. Assign one of the magnitude columns to the Y axis and the difference between two magnitude columns to the X axis.

4. The software also allows easy data handling. You can define subsets by hand (literally, by drawing them on the plot with the mouse) or using algebraic expressions. It is a good idea to reject every source with a too-poor chi^2 (due to improper PSF fitting) and/or a too-large magnitude error. This is optional but ''highly recommended''.

5. You can also apply any photometric correction directly to the tables, without overwriting them, and experiment with them in TOPCAT. For example, you can apply corrections for Galactic reddening and extinction.

==== A working example with TOPCAT ====

--------

= Calibration using IRAF =

'''1. Preparation'''

In the example below, we assume that all our data is in a directory called "Data", and the calibration files (bias, dark, and flat frames) are in a subdirectory called "Calib", where we want to prepare the master calibration frames.

If you are working on a virtual machine like those provided by the supervisors of the lab course, it is better to start IRAF within the "iraf_work" directory, where a "login.cl" file is located (as described in the following example). Otherwise, you can go directly to your data directory; in that case, skip the first line of the example.

First, we go to the iraf directory and start IRAF by typing:

{{{
cd iraf_work
cl
}}}

We are now within the IRAF environment. Usual shell commands like "cd" and "ls" are available to browse files and folders. (You can leave the IRAF environment by typing "logout".) We can now start the reduction of the data. First, we need to set some parameters and file lists for our convenience.

{{{#!cplusplus
# Set the correct image size (4096x4096 pixels):
reset stdimage=imt4096

# To load the necessary packages, type one after another:
noao
imred
ccdred
# (NOTE: You must type these three commands every time you restart IRAF if you want to do data reduction. IRAF packages are organized in a tree structure, and browsing through packages is like browsing through directories, except you come back to the beginning at each restart.)

# Go to your calibration folder (replace <calib directory path> with the path of the calibration directory, in this case, it is "Data/Calib")
cd <calib directory path>

# Create a list of the calibration files (you can display the files directly on the screen by typing "files *fit")
files *fit > calibrationframes.list

# Check the list
cat calibrationframes.list

# GENERAL REMARK: the IRAF software does not like filenames starting with a "-". It is a special character used for options in command lines. If you have image files named "-001G60.fit" or similar, rename them so that their names start with anything but "-". This will save you lots of errors.

# Create separate lists for flats, bias and darks (assuming the prefixes "DARK", "FLAT" and "BIAS")
cat calibrationframes.list | grep "DARK" > darkframes.list
cat calibrationframes.list | grep "FLAT" > flatframes.list
cat calibrationframes.list | grep "BIAS" > biasframes.list

# (If necessary, you can edit the lists using any external text editor outside IRAF. Useful if your bias frames are named like "DARK_000s"...)

# Set the image type to an IRAF-compliant name (zero, dark, flat, object...) for every calibration frame. These are usually already set, but it is better to check that they are correct. Otherwise, the frames will not be recognized by IRAF.
# You can read-out the header using: imheader <name> l+

# GENERAL REMARK: at any step where filenames are required, you can type them separately (separated with commas, no spaces) or you can provide a list you created using the character "@". In this example, we prefer the latter for convenience.


hedit @flatframes.list imagetyp "flat" update+ verify-
hedit @biasframes.list imagetyp "zero" update+ verify-
hedit @darkframes.list imagetyp "dark" update+ verify-

# Check image types
ccdlist @calibrationframes.list ccdtype=""
}}}

-------

Here the workflow bifurcates according to the kind of calibration frames you have. If you have dark frames for all of your observations (i.e., dark frames with ''exactly'' the same exposure time as ''every'' frame), then you can apply the direct master dark subtraction: bias and dark current will be accurately subtracted in one step, leaving no flaw. If you do not (for instance, if you did not take any matching darks for your flat-fields), then you have to apply separate bias and dark-current subtractions in order to achieve the same quality in your final results.

If you can perform direct dark subtraction, please refer to Section 2. If you need to perform separate bias and dark subtractions, please refer to Sections 2' and 3. (We describe both procedures with the most general case as an example, i.e., a set of scalable dark frames. You can easily adapt the reduction sequence for every exposure time you have.)

Afterwards, go back to the standard data reduction sequence, from Section 4.

-------

'''2. Direct master dark subtraction: prepare the master darks'''


First, create several dark frame lists according to exposure time and camera temperature (after having checked what you had). Name those lists with self-explanatory names. Then, combine the dark frames with the task "darkcombine".

{{{#!cplusplus

# Create the dark lists. Here is a first manual way to do it.

files <insert filenames of darks with 1s> > dark_1s_-25deg.list
files <insert filenames of darks with 2s> > dark_2s_-25deg.list
files <insert filenames of darks with 3s> > dark_3s_-25deg.list
files <insert filenames of darks with 5s> > dark_5s_-25deg.list
files <insert filenames of darks with 10s> > dark_10s_-25deg.list
files <insert filenames of darks with 20s> > dark_20s_-25deg.list
files <insert filenames of darks with 60s> > dark_60s_-25deg.list
files <insert filenames of darks with 120s> > dark_120s_-25deg.list
files <insert filenames of darks with 180s> > dark_180s_-25deg.list
files <insert filenames of darks with 240s> > dark_240s_-25deg.list


# Another, more automatic way (assuming the filenames contain information about exposure time):

cat darkframes.list | grep "001s" > dark_1s_-25deg.list
cat darkframes.list | grep "002s" > dark_2s_-25deg.list
cat darkframes.list | grep "003s" > dark_3s_-25deg.list
cat darkframes.list | grep "005s" > dark_5s_-25deg.list
cat darkframes.list | grep "010s" > dark_10s_-25deg.list
cat darkframes.list | grep "020s" > dark_20s_-25deg.list
cat darkframes.list | grep "060s" > dark_60s_-25deg.list
cat darkframes.list | grep "120s" > dark_120s_-25deg.list
cat darkframes.list | grep "180s" > dark_180s_-25deg.list
cat darkframes.list | grep "240s" > dark_240s_-25deg.list


# Set the parameters of darkcombine:
epar darkcombine

# You are now in an interactive interface showing a list of all the parameters for the task "darkcombine". You can navigate using the arrow keys, type and press enter to edit a parameter value (error messages are shown at the bottom). Set the parameter list to the following:

PACKAGE = ccdred
   TASK = darkcombine

input = List of dark images to combine
(output = ) Output dark image root name
(combine= median) Type of combine operation
(reject = sigclip) Type of rejection
(ccdtype= dark) CCD image type to combine
(process= no) Process images before combining?
(delete = no) Delete input images after combining?
(clobber= no) Clobber existing output image?
(scale = exposure) Image scaling
(statsec= ) Image section for computing statistics
(nlow = 0) minmax: Number of low pixels to reject
(nhigh = 1) minmax: Number of high pixels to reject
(nkeep = 1) Minimum to keep (pos) or maximum to reject (neg)
(mclip = yes) Use median in sigma clipping algorithms?
(lsigma = 3.) Lower sigma clipping factor
(hsigma = 3.) Upper sigma clipping factor
(rdnoise= 0.) ccdclip: CCD readout noise (electrons)
(gain = 1.) ccdclip: CCD gain (electrons/DN)
(snoise = 0.) ccdclip: Sensitivity noise (fraction)
(pclip = -0.5) pclip: Percentile clipping parameter
(blank = 0.) Value if there are no pixels
(mode = ql)

# Exit the interface by typing ":q".

# Run darkcombine to combine dark frames of equal exposure times into master darks. Choose self-explanatory names for the resulting, combined frames.
# NOTE: If some lists have only a few dark frames (3 or less), use "average" instead of "median" for the combine parameter. (In this example, the longest exposure times have the least dark frames.)

darkcombine @dark_1s_-25deg.list output="master_dark_1s_-25deg" combine="median" &
darkcombine @dark_2s_-25deg.list output="master_dark_2s_-25deg" combine="median" &
darkcombine @dark_3s_-25deg.list output="master_dark_3s_-25deg" combine="median" &
darkcombine @dark_5s_-25deg.list output="master_dark_5s_-25deg" combine="median" &
darkcombine @dark_10s_-25deg.list output="master_dark_10s_-25deg" combine="median" &
darkcombine @dark_20s_-25deg.list output="master_dark_20s_-25deg" combine="median" &
darkcombine @dark_60s_-25deg.list output="master_dark_60s_-25deg" combine="median" &
darkcombine @dark_120s_-25deg.list output="master_dark_120s_-25deg" combine="median" &
darkcombine @dark_180s_-25deg.list output="master_dark_180s_-25deg" combine="average" &
darkcombine @dark_240s_-25deg.list output="master_dark_240s_-25deg" combine="average" &

# If some individual frames are giving you trouble (error messages), you can exclude them from the lists.

#NOTE: the "&" symbol simply runs the task in the background. It is not mandatory.
}}}

Now jump on Section 4 to continue with the standard reduction sequence.


-------

'''2'. Separate subtraction: prepare the master bias frame'''

If you didn't, create a list containing the names of all the bias frames you have (as explained in Section 1). Then, combine the bias frames with the task "zerocombine".

{{{#!cplusplus
# Set the parameters of zerocombine:
epar zerocombine

# You are now in an interactive interface showing a list of all the parameters for the task "zerocombine". You can navigate using the arrow keys, type and press enter to edit a parameter value (error messages are shown at the bottom). Set the parameter list to the following:

PACKAGE = ccdred
   TASK = zerocombine

input = List of zero level images to combine
(output = ) Output zero level name
(combine= median) Type of combine operation
(reject = sigclip) Type of rejection
(ccdtype= zero) CCD image type to combine
(process= no) Process images before combining?
(delete = no) Delete input images after combining?
(clobber= no) Clobber existing output image?
(scale = none) Image scaling
(statsec= ) Image section for computing statistics
(nlow = 0) minmax: Number of low pixels to reject
(nhigh = 1) minmax: Number of high pixels to reject
(nkeep = 1) Minimum to keep (pos) or maximum to reject (neg)
(mclip = yes) Use median in sigma clipping algorithms?
(lsigma = 3.) Lower sigma clipping factor
(hsigma = 3.) Upper sigma clipping factor
(rdnoise= 0.) ccdclip: CCD readout noise (electrons)
(gain = 1.) ccdclip: CCD gain (electrons/DN)
(snoise = 0.) ccdclip: Sensitivity noise (fraction)
(pclip = -0.5) pclip: Percentile clipping parameter
(blank = 0.) Value if there are no pixels
(mode = ql)

# Exit the interface by typing ":q".

# Run zerocombine to combine your bias frames into a master bias. Give it a self-explanatory name as output.
# NOTE: If you have 3 bias frames or less, use "average" instead of "median" for the combine parameter. You can set it directly in the zerocombine parameters or specify it on the command line for one run (as exemplified below).

zerocombine @biasframes.list output="master_bias" combine="average" &

# If some individual frames are giving you trouble (error messages), you can exclude them from the list.

#NOTE: the "&" symbol simply runs the task in the background. It is not mandatory.
}}}

Now you have your master bias, representing the bias signal caused by the voltage applied to the CCD during readout. Because it affects every image taken with the camera, we need to subtract it from absolutely every frame: darks, flats and science frames. In this example, we perform this task separately in different Sections for clarity.

The subtraction is performed by the task ''imarith'' (it does not need any parameter settings, you can use it as is).

{{{#!cplusplus

# Do the subtraction. The syntax for imarith is: imarith [frame_1] [operation] [frame_2] [frame_result]. You can provide lists instead of individual frames.
# In this example, we subtract a single frame from a whole list of frames, and we save the results in separate files by giving them a prefix. (This is done by providing a filelist plus a prefix as an output. You can also provide a new list you made containing custom filenames.)

imarith @darkframes.list - master_bias.fits bias_sub_@darkframes.list &

}}}



-------

'''3. Separate subtraction: prepare the master dark frames'''



First, create several dark frame lists (with bias-subtracted darks only) according to exposure time and camera temperature (after having checked what you had). Name those lists with self-explanatory names. Then, combine the dark frames with the task "darkcombine".

{{{#!cplusplus

# Create the dark lists. Here is a first manual way to do it.

files <insert filenames of bias-subtracted darks with 1s> > dark_1s_-25deg.list
files <insert filenames of bias-subtracted darks with 2s> > dark_2s_-25deg.list
files <insert filenames of bias-subtracted darks with 3s> > dark_3s_-25deg.list
files <insert filenames of bias-subtracted darks with 5s> > dark_5s_-25deg.list
files <insert filenames of bias-subtracted darks with 10s> > dark_10s_-25deg.list
files <insert filenames of bias-subtracted darks with 20s> > dark_20s_-25deg.list
files <insert filenames of bias-subtracted darks with 60s> > dark_60s_-25deg.list
files <insert filenames of bias-subtracted darks with 120s> > dark_120s_-25deg.list
files <insert filenames of bias-subtracted darks with 180s> > dark_180s_-25deg.list
files <insert filenames of bias-subtracted darks with 240s> > dark_240s_-25deg.list


# Another, more automatic way (assuming the filenames contain information about exposure time).
# Note that darkframes.list contains the original filenames, so we first build a list of the bias-subtracted darks:

files bias_sub_* > darksub.list

cat darksub.list | grep "001s" > dark_1s_-25deg.list
cat darksub.list | grep "002s" > dark_2s_-25deg.list
cat darksub.list | grep "003s" > dark_3s_-25deg.list
cat darksub.list | grep "005s" > dark_5s_-25deg.list
cat darksub.list | grep "010s" > dark_10s_-25deg.list
cat darksub.list | grep "020s" > dark_20s_-25deg.list
cat darksub.list | grep "060s" > dark_60s_-25deg.list
cat darksub.list | grep "120s" > dark_120s_-25deg.list
cat darksub.list | grep "180s" > dark_180s_-25deg.list
cat darksub.list | grep "240s" > dark_240s_-25deg.list


# Set the parameters of darkcombine to the following:
epar darkcombine

PACKAGE = ccdred
   TASK = darkcombine

input = List of dark images to combine
(output = ) Output dark image root name
(combine= median) Type of combine operation
(reject = sigclip) Type of rejection
(ccdtype= dark) CCD image type to combine
(process= no) Process images before combining?
(delete = no) Delete input images after combining?
(clobber= no) Clobber existing output image?
(scale = exposure) Image scaling
(statsec= ) Image section for computing statistics
(nlow = 0) minmax: Number of low pixels to reject
(nhigh = 1) minmax: Number of high pixels to reject
(nkeep = 1) Minimum to keep (pos) or maximum to reject (neg)
(mclip = yes) Use median in sigma clipping algorithms?
(lsigma = 3.) Lower sigma clipping factor
(hsigma = 3.) Upper sigma clipping factor
(rdnoise= 0.) ccdclip: CCD readout noise (electrons)
(gain = 1.) ccdclip: CCD gain (electrons/DN)
(snoise = 0.) ccdclip: Sensitivity noise (fraction)
(pclip = -0.5) pclip: Percentile clipping parameter
(blank = 0.) Value if there are no pixels
(mode = ql)


# Run darkcombine to combine dark frames of equal exposure times into master darks. Choose self-explanatory names for the resulting, combined frames. Again, use 'average' instead of 'median' if some lists have 3 exposures or less (which is generally the case for longer exposure times). In this example, we don't have enough frames for 3min and 4min exposure time:

darkcombine @dark_1s_-25deg.list output="master_dark_1s_-25deg" combine="median" &
darkcombine @dark_2s_-25deg.list output="master_dark_2s_-25deg" combine="median" &
darkcombine @dark_3s_-25deg.list output="master_dark_3s_-25deg" combine="median" &
darkcombine @dark_5s_-25deg.list output="master_dark_5s_-25deg" combine="median" &
darkcombine @dark_10s_-25deg.list output="master_dark_10s_-25deg" combine="median" &
darkcombine @dark_20s_-25deg.list output="master_dark_20s_-25deg" combine="median" &
darkcombine @dark_60s_-25deg.list output="master_dark_60s_-25deg" combine="median" &
darkcombine @dark_120s_-25deg.list output="master_dark_120s_-25deg" combine="median" &
darkcombine @dark_180s_-25deg.list output="master_dark_180s_-25deg" combine="average" &
darkcombine @dark_240s_-25deg.list output="master_dark_240s_-25deg" combine="average" &


}}}
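To see what the median + sigma-clipping combination performed by darkcombine does, here is a small numpy sketch on synthetic frames. It is a one-pass, MAD-based stand-in for IRAF's iterative sigclip, not the actual algorithm; all values are made up:

```python
import numpy as np

# Toy stack of five 4x4 "dark frames": a constant dark level of 100 counts,
# with one frame polluted by a strong cosmic-ray hit.
rng = np.random.default_rng(0)
stack = 100.0 + rng.normal(0.0, 2.0, size=(5, 4, 4))
stack[2, 1, 1] += 5000.0  # cosmic-ray hit in frame 2

# One-pass clipping around the per-pixel median, using a robust (MAD-based)
# sigma estimate, followed by a median combine of the surviving values:
med = np.median(stack, axis=0)
sig = 1.4826 * np.median(np.abs(stack - med), axis=0)
clipped = np.where(np.abs(stack - med) > 3.0 * sig, np.nan, stack)
master_dark = np.nanmedian(clipped, axis=0)
```

The cosmic-ray pixel ends up close to the true dark level of 100 counts in the combined frame, which is exactly why combining several darks is preferred over using a single one.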

You now have a master bias frame (containing only the bias signal) and several master dark frames (containing only dark current). You can now scale these frames with imarith to create new master darks with custom exposure times. We explain here how to proceed. Let's suppose we want to make a master dark frame with an exposure time of 1.4 seconds:


{{{#!cplusplus

# First, locate the closest master dark (by exposure time) in your master dark set. It also needs to have a LONGER exposure time. In this case, our closest (longer) dark frame is 2s. We then use the imarith command to scale the dark current down to 1.4s:

imarith master_dark_2s_-25deg.fits * 0.7 master_dark_1s4_-25deg.fits &

# (The conversion factor is the ratio of the target exposure time to that of the master dark: 1.4s / 2s = 0.7.)

}}}

Proceed the same way to produce master dark frames with any exposure time you need.

The reason why you must '''always''' derive a new master dark from one with a longer exposure time is the risk of '''noise amplification''': when you multiply the dark current by a constant, its standard deviation is multiplied by the same factor, so scaling up artificially produces more noise than the real dark current would contain. The opposite direction is harmless: scaling down reduces the scatter and weakens the noise, and a lower noise will be hidden behind the other, larger noise sources anyway. So NEVER multiply a master dark by a constant greater than 1!
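This scaling rule can be checked numerically; a minimal numpy sketch with a synthetic 2 s master dark of 5 counts RMS noise (all values hypothetical):

```python
import numpy as np

# Scaling a frame by a constant k scales its noise (standard deviation) by k:
rng = np.random.default_rng(1)
dark_2s = 200.0 + rng.normal(0.0, 5.0, size=100_000)  # toy 2 s master dark

dark_1s4 = dark_2s * 0.7   # 2 s -> 1.4 s: noise shrinks to 0.7 * 5 = 3.5
dark_6s = dark_2s * 3.0    # 2 s -> 6 s: noise balloons to 3 * 5 = 15
```

The up-scaled frame carries three times the noise of the original, which is why summing real long-exposure darks beats multiplying a short one.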

If you have no dark frame with a longer exposure time than the one you need, it is preferable to sum frames of shorter exposure times with imarith. In this example, let's suppose we need an exposure time of 5 minutes (300s); the best is to add the 240s frame to the 60s frame:

{{{#!cplusplus
imarith master_dark_240s_-25deg.fits + master_dark_60s_-25deg.fits master_dark_300s_-25deg.fits &
}}}

-------

'''4. Prepare the master flat-fields'''

First, subtract the master dark frame having an exposure time similar to that of each flat-field frame (the exposure time is listed in the header, check it with imheader as before). Subtract the master bias frame too if you have one. The task used is imarith.

{{{#!cplusplus

# If you have separate masterdarks and masterbias (in case you didn't perform direct dark subtraction), subtract the masterbias from all your flats first. (Skip this step otherwise.)
imarith @flatframes.list - master_bias.fits bias_sub_//@flatframes.list &

# Now prepare list of files as explained for the masterdarks. (If you have performed the masterbias subtraction at the previous step, these lists must contain subtracted flats only.)
files <insert filenames of flats in red filter> > flatred.list
files <insert filenames of flats in green filter> > flatgreen.list
files <insert filenames of flats in blue filter> > flatblue.list
files <insert filenames of flats in Halpha filter> > flathalpha.list
files <insert filenames of flats in luminance filter> > flatluminance.list

# Do the masterdark subtraction separately for each filter and each exposure time.
imarith @flatred.list - master_dark_1s_-25deg.fits dark_sub_//@flatred.list &
imarith @flatblue.list - master_dark_2s5_-25deg.fits dark_sub_//@flatblue.list &
imarith @flatluminance.list - master_dark_0s2_-25deg.fits dark_sub_//@flatluminance.list &
imarith @flatgreen.list - master_dark_1s4_-25deg.fits dark_sub_//@flatgreen.list &

}}}


Then, combine the dark-subtracted flat-fields ''separately for each filter''. We do not normalise the flat field by the mean value now (the flats are scaled by their mean value when used by ccdproc later).

{{{#!cplusplus

# Edit the parameters of flatcombine as follows:
epar flatcombine

input = List of flat field images to combine
(output = ) Output flat field root name
(combine= average) Type of combine operation
(reject = sigclip) Type of rejection
(ccdtype= flat) CCD image type to combine
(process= no) Process images before combining?
(subsets= no) Combine images by subset parameter?
(delete = no) Delete input images after combining?
(clobber= no) Clobber existing output image?
(scale = mode) Image scaling
(statsec= ) Image section for computing statistics
(nlow = 1) minmax: Number of low pixels to reject
(nhigh = 1) minmax: Number of high pixels to reject
(nkeep = 1) Minimum to keep (pos) or maximum to reject (neg)
(mclip = yes) Use median in sigma clipping algorithms?
(lsigma = 3.) Lower sigma clipping factor
(hsigma = 3.) Upper sigma clipping factor
(rdnoise= 0.) ccdclip: CCD readout noise (electrons)
(gain = 1.) ccdclip: CCD gain (electrons/DN)
(snoise = 0.) ccdclip: Sensitivity noise (fraction)
(pclip = -0.5) pclip: Percentile clipping parameter
(blank = 1.) Value if there are no pixels
(mode = ql)

# Run flatcombine to produce the master flat:
flatcombine dark_sub_//@flatred.list output=masterflat_red.fits &

# Here is an example of the output of flatcombine (in the "logfile" file created in the current directory):
May 8 16:57: IMCOMBINE
  combine = average, scale = mode, zero = none, weight = none
  reject = sigclip, mclip = yes, nkeep = 1
  lsigma = 3., hsigma = 3.
  blank = 1.
                Images Exp Mode Scale
  darksub_AutoFlat-PANoRot-Red-Bin1-001-NoRot.fits 3.7 16360. 1.035
  darksub_AutoFlat-PANoRot-Red-Bin1-002-NoRot.fits 4.1 17095. 0.990
  darksub_AutoFlat-PANoRot-Red-Bin1-003-NoRot.fits 4.4 17113. 0.989
  darksub_AutoFlat-PANoRot-Red-Bin1-004-NoRot.fits 4.7 17089. 0.991
  darksub_AutoFlat-PANoRot-Red-Bin1-005-NoRot.fits 5.0 17001. 0.996

  Output image = combineflat_red.fits, ncombine = 5

# Proceed similarly for flats in other filters.

flatcombine dark_sub_//@flatgreen.list output=masterflat_green.fits &
flatcombine dark_sub_//@flatblue.list output=masterflat_blue.fits &
flatcombine dark_sub_//@flatluminance.list output=masterflat_luminance.fits &

}}}
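What the master flat will be used for can be previewed with a small numpy sketch on synthetic data: dividing by the mean-normalised flat removes the pixel-to-pixel response, which is what ccdproc's flat scaling does later. All numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
true_sky = np.full((4, 4), 1000.0)           # uniform illumination
sensitivity = rng.uniform(0.8, 1.2, (4, 4))  # pixel-to-pixel response

science = true_sky * sensitivity             # what the camera records
master_flat = 20000.0 * sensitivity          # dark-subtracted twilight flat

# Divide by the flat normalised to unity mean:
calibrated = science / (master_flat / master_flat.mean())
```

The sensitivity pattern cancels exactly, leaving a flat calibrated frame; only the mean sensitivity factor is absorbed into the normalisation.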

----------

'''5. Process the raw data'''

  * '''Preparation'''
We now go to a directory with the raw data of an object. In this example we have 10 exposures of 180 seconds each of M53, in the R, G, and B filters.

{{{#!cplusplus

# Go to the object directory
cd ../M53

# Display the files on the screen
ls

#Example of output:
M53_-003_G.fit M53_-006_B.fit M53_-008_R.fit
M53_-001_B.fit M53_-003_R.fit M53_-006_G.fit M53_-009_B.fit
M53_-001_G.fit M53_-004_B.fit M53_-006_R.fit M53_-009_G.fit
M53_-001_R.fit M53_-004_G.fit M53_-007_B.fit M53_-009_R.fit
M53_-002_B.fit M53_-004_R.fit M53_-007_G.fit M53_-010_B.fit
M53_-002_G.fit M53_-005_B.fit M53_-007_R.fit M53_-010_G.fit
M53_-002_R.fit M53_-005_G.fit M53_-008_B.fit M53_-010_R.fit
M53_-003_B.fit M53_-005_R.fit M53_-008_G.fit

# make list of files (all of them, and filter-separated):
files M53_-*.fit > M53-all.list
files M53_*R.fit > M53-R.list
files M53_*G.fit > M53-G.list
files M53_*B.fit > M53-B.list

# Set image type in IRAF-compliant name (object) for each frame.
hedit @M53-all.list imagetyp "object" update+ verify-

# Make a copy of your raw data, because some tasks (like ccdproc) overwrite their input files. (Here, we give a "raw" prefix to all of them.)
imcopy @M53-all.list raw//@M53-all.list &

# Copy the necessary calibration files (master dark frame and master flats) into the current directory.
imcopy ../Calibration/master*.fits . &

# It is always a good idea to have a look at your files. You can open some of them to check them by eye: open another terminal and start DS9 by typing "ds9". Then, back in the IRAF terminal, type for instance:

display M53_-007_R.fit 1
display M53_-007_G.fit 2
display M53_-007_B.fit 3
}}}

-------

   * The next steps, subtracting the master bias and the master dark and dividing by the flat-field, are done with the tasks imarith and ccdproc:

{{{#!cplusplus

# First, subtract the masterbias frame from all your scientific images. We name the output files after the input files, adding a "biassub_" prefix:
imarith @M53-all.list - master_bias.fits biassub_//@M53-all.list &
# Now, subtract the masterdark frame from all your scientific images (in this example, all of them have an exposure time of 180s, so we can do it in a single command line). We name the output files after the input files, adding a "darksub_" prefix:
imarith biassub_//@M53-all.list - master_dark_180s_-25deg.fits darksub_//@M53-all.list &


# divide by flat field with ccdproc
# prepare the task; leave the 'flat' parameter free
epar ccdproc

PACKAGE = ccdred
   TASK = ccdproc
    
images = List of CCD images to correct
(output = ) List of output CCD images
(ccdtype= object) CCD image type to correct
(max_cac= 0) Maximum image caching memory (in Mbytes)
(noproc = no) List processing steps only?

(fixpix = no) Fix bad CCD lines and columns?
(oversca= no) Apply overscan strip correction?
(trim = no) Trim the image?
(zerocor= no) Apply zero level correction?
(darkcor= no) Apply dark count correction?
(flatcor= yes) Apply flat field correction?
(illumco= no) Apply illumination correction?
(fringec= no) Apply fringe correction?
(readcor= no) Convert zero level image to readout correction?
(scancor= no) Convert flat field image to scan correction?

(readaxi= line) Read out axis (column|line)
(fixfile= ) File describing the bad lines and columns
(biassec= ) Overscan strip image section
(trimsec= ) Trim data section
(zero = ) Zero level calibration image
(dark = ) Dark count calibration image
(flat = ) Flat field images
(illum = ) Illumination correction images
(fringe = ) Fringe correction images
(minrepl= 1.) Minimum flat field value
(scantyp= shortscan) Scan type (shortscan|longscan)
(nscan = 1) Number of short scan lines

(interac= no) Fit overscan interactively?
(functio= legendre) Fitting function
(order = 1) Number of polynomial terms or spline pieces
(sample = *) Sample points to fit
(naverag= 1) Number of sample points to combine
(niterat= 1) Number of rejection iterations
(low_rej= 3.) Low sigma rejection factor
(high_re= 3.) High sigma rejection factor
(grow = 0.) Rejection growing radius
(mode = ql)


# Flat-field division with ccdproc and correct parameters, for each filter :
ccdproc darksub_//@M53-R.list flat="masterflat_red.fits" &
ccdproc darksub_//@M53-G.list flat="masterflat_green.fits" &
ccdproc darksub_//@M53-B.list flat="masterflat_blue.fits" &


# Check the header of one file:
imheader darksub_M53_-001_R.fit l+

# At the end of the header, you will notice that some keywords have been added by ccdproc (if you are no longer sure whether a frame has been reduced, you can use them to check):
FLATCOR = 'May 8 17:08 Flat field image is masterflat_red.fits with scale=1750'
CCDSEC = '[1:4096,1:4096]'
CCDPROC = 'May 8 17:09 CCD processing done'

}}}
----------

'''6. Align and stack the calibrated images'''

 To be correctly combined, the calibrated sky images must be aligned. To do so, we can use well-detected stars in the images. We run the daofind task (package daophot) on a given image, chosen as the ''reference'' image, to identify them. See the Guide for Photometry ([[ForPraAnalysis/IRAFexamplePhot| Photometry using IRAF]]) for more details on the daophot package. Critical parameters are the sky background standard deviation, the desired detection threshold, and the estimated size of the PSF.


 First, estimate the sky background: display an image in ds9 and run the "imexamine" command in IRAF. Now move the cursor to the ds9 display window: you will notice that the cursor has changed its appearance. Place the cursor over an empty field (no star) and press the 'm' key. Statistics over a 5x5 pixel box around the cursor will appear in the IRAF prompt. Here is an example:

{{{#!cplusplus
# SECTION NPIX MEAN MEDIAN STDDEV MIN MAX
[2073:2077,2922:2926] 25 1535. 1540. 24.39 1479. 1572.
# SECTION NPIX MEAN MEDIAN STDDEV MIN MAX
[3193:3197,1827:1831] 25 1537. 1546. 24.71 1493. 1580.
# SECTION NPIX MEAN MEDIAN STDDEV MIN MAX
[1493:1497,1415:1419] 25 1548. 1544. 23.35 1509. 1610.
}}}

So let us use a value of 24 for this example. Leave the imexamine mode by moving the cursor to the display window and pressing 'q'. The IRAF window is then back in normal command-line mode.
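For reference, the statistics imexamine prints for a 5x5 box can be reproduced with a short numpy sketch on synthetic sky data (the image, cursor position, and noise values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.normal(1540.0, 24.0, size=(100, 100))  # toy sky background

x, y = 50, 50                                      # hypothetical cursor position
patch = image[y - 2:y + 3, x - 2:x + 3]            # the 5x5 box around it

npix = patch.size
mean, median, stddev = patch.mean(), float(np.median(patch)), patch.std()
# npix, mean, median and stddev correspond to the NPIX/MEAN/MEDIAN/STDDEV
# columns printed by imexamine's 'm' key.
```

The STDDEV value from an empty patch is exactly the background sigma you feed to datapars below.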

We now set necessary parameters for the star finding algorithm (you can find a thorough description of DAOPHOT and its main parameters in the [[https://wiki.mpe.mpg.de/cog/ForPraAnalysis/IRAFexamplePhot|Data Analysis section]]):

{{{#!cplusplus
# to load the necessary packages type in one after another:
digiphot
daophot

# Edit the main parameters of daofind (defined in the datapars and findpars parameter sets):
datapars

PACKAGE = daophot
   TASK = datapars

(scale = 1.) Image scale in units per pixel
(fwhmpsf= 6.) FWHM of the PSF in scale units
(emissio= yes) Features are positive ?
(sigma = 24.) Standard deviation of background in counts
(datamin= INDEF) Minimum good data value
(datamax= INDEF) Maximum good data value
(noise = poisson) Noise model
(ccdread= rdnoise) CCD readout noise image header keyword
(gain = egain) CCD gain image header keyword
(readnoi= 0.) CCD readout noise in electrons
(epadu = 1.) Gain in electrons per count
(exposur= exptime) Exposure time image header keyword
(airmass= airmass) Airmass image header keyword
(filter = filter) Filter image header keyword
(obstime= date-obs) Time of observation image header keyword
(itime = 1.) Exposure time
(xairmas= INDEF) Airmass
(ifilter= INDEF) Filter
(otime = INDEF) Time of observation
(mode = ql)
($nargs = 0)

findpars

PACKAGE = daophot
   TASK = findpars

(thresho= 100.) Threshold in sigma for feature detection
(nsigma = 1.5) Width of convolution kernel in sigma
(ratio = 1.) Ratio of minor to major axis of Gaussian kernel
(theta = 0.) Position angle of major axis of Gaussian kernel
(sharplo= 0.2) Lower bound on sharpness for feature detection
(sharphi= 1.) Upper bound on sharpness for feature detection
(roundlo= -1.) Lower bound on roundness for feature detection
(roundhi= 1.) Upper bound on roundness for feature detection
(mkdetec= no) Mark detections on the image display ?
(mode = ql)
($nargs = 0)

# Run daofind on your image (in this example, darksub_M53_-005_R.fit). Some user input is prompted, but all critical parameters have been set already. You can just press enter:
daofind darksub_M53_-005_R.fit default
FWHM of features in scale units (6.) (CR or value):
        New FWHM of features: 6. scale units 6. pixels
Standard deviation of background in counts (24.) (CR or value):
        New standard deviation of background: 24. counts
Detection threshold in sigma (100.) (CR or value):
        New detection threshold: 100. sigma 2400. counts
Minimum good data value (INDEF) (CR or value):
        New minimum good data value: INDEF counts
Maximum good data value (INDEF) (CR or value):
        New maximum good data value: INDEF counts

# A starlist with all the star detections has been created and named <original filename>.coo.1. Sort the star detections by brightness with the command:
psort darksub_M53_-005_R.fit.coo.1 MAG

# Now mark the stars on the image in ds9 by running the command:
tvmark 1 darksub_M53_-005_R.fit.coo.1 mark=circle radii=20 color=204

# GENERAL REMARK: To avoid overwriting, the .coo starlists are named with a number as a suffix, denoting the file version. If you create a new starlist later on, it will be named "darksub_M53_-005_R.fit.coo.2", and so on.
}}}
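To see what daofind is doing conceptually, here is a deliberately crude sketch of threshold-plus-local-maximum detection on a synthetic frame. daofind additionally convolves the image with a Gaussian kernel and applies sharpness/roundness cuts, which this sketch omits; the star position and noise values are made up:

```python
import numpy as np

# Flag pixels that stand above the sky by more than threshold * sigma and
# are local maxima of their 3x3 neighbourhood.
def find_peaks(image, sky, sigma, threshold):
    bright = image > sky + threshold * sigma
    peaks = []
    for yy in range(1, image.shape[0] - 1):
        for xx in range(1, image.shape[1] - 1):
            if bright[yy, xx] and image[yy, xx] == image[yy-1:yy+2, xx-1:xx+2].max():
                peaks.append((xx, yy))
    return peaks

rng = np.random.default_rng(4)
img = rng.normal(1540.0, 24.0, size=(50, 50))
img[20, 30] += 5000.0      # one bright "star"

stars = find_peaks(img, sky=1540.0, sigma=24.0, threshold=100.0)
```

With a threshold of 100 sigma (2400 counts above sky, as in the daofind run above), only the injected star survives; background fluctuations never come close.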

We now need to select well-identifiable stars for the alignment. The task '''pstselect''' chooses suitable stars automatically, e.g. the brightest star in each region of a size set by ''psfrad'':

{{{#!cplusplus
# Set the daopars parameters:
daopars

PACKAGE = daophot
   TASK = daopars

(functio= gauss) Form of analytic component of psf model
(varorde= 2) Order of empirical component of psf model
(nclean = 5) Number of cleaning iterations for computing psf model
(saturat= no) Use wings of saturated stars in psf model computation
(matchra= 3.) Object matching radius in scale units
(psfrad = 25.) Radius of psf model in scale units
(fitrad = 10) Fitting radius in scale units
(recente= yes) Recenter stars during fit ?
(fitsky = no) Recompute group sky value during fit ?
(groupsk= yes) Use group rather than individual sky values ?
(sannulu= 0.) Inner radius of sky fitting annulus in scale units
(wsannul= 11.) Width of sky fitting annulus in scale units
(flaterr= 0.75) Flat field error in percent
(proferr= 5.) Profile error in percent
(maxiter= 100) Maximum number of fitting iterations
(clipexp= 6) Bad data clipping exponent
(clipran= 2.5) Bad data clipping range in sigma
(mergera= INDEF) Critical object merging radius in scale units
(critsnr= 1.) Critical S/N ratio for group membership
(maxnsta= 50000) Maximum number of stars to fit
(maxgrou= 100) Maximum number of stars to fit per group
(mode = ql)
($nargs = 0)

# Select alignment stars. They will be dropped in a new starlist (the 3rd argument). The syntax is
# pstselect [frame name] [input starlist] [output starlist] [number of objects in output list]
pstselect darksub_M53_-005_R.fit darksub_M53_-005_R.fit.coo.1 darksub_M53_-005_R_alignstar.coo 50

# Check the selected stars on the display:
display darksub_M53_-005_R.fit 1
tvmark 1 darksub_M53_-005_R.fit.coo.1 mark=circle radii=30 color=205
pdump darksub_M53_-005_R_alignstar.coo xcenter,ycenter,id yes | tvmark 1 STDIN mark=circle radii="50" col=204 label+
}}}

If the red-circled stars shown on the image are not acceptable (too faint, too few, spurious detections, too close to other stars), repeat the star detection with other parameters.


NOTE: you may also select the alignment stars manually. Have a look at the red-circled stars on the image and take note of the id numbers of those you prefer. Then, in a new terminal, create a new "xxx_alignstar.coo" file and copy the chosen stars from the old starlist to the new one. (10 stars are sufficient to obtain a good alignment.) Here is how to proceed in a convenient way:

{{{#!cplusplus

# Outside the ds9 environment:

# Make a copy of the original starlist keeping only the header lines (no stars):
cat darksub_M53_-005_R_alignstar.coo | grep '#' > darksub_M53_-005_R_alignstar2.coo

# Copy the stars you picked into the new list, one by one, using their index. In this example, the indexes are 336, 8 and 74. (Put five spaces after the index in the grep pattern, to be sure to match the index column and not another number.)

cat darksub_M53_-005_R_alignstar.coo | grep '336     ' >> darksub_M53_-005_R_alignstar2.coo
cat darksub_M53_-005_R_alignstar.coo | grep '8     ' >> darksub_M53_-005_R_alignstar2.coo
cat darksub_M53_-005_R_alignstar.coo | grep '74     ' >> darksub_M53_-005_R_alignstar2.coo

# Then check if everything went fine in the new starlist.
}}}


Once you are sure the selected stars are OK, use their coordinates to align all the other frames in the same filter to the reference image (here, we chose darksub_M53_-005_R.fit):

{{{#!cplusplus

# Select x and y colums of your favorite starlist (here, darksub_M53_-005_R_alignstar.coo) and create a new, reduced one:
pdump darksub_M53_-005_R_alignstar.coo xcenter,ycenter,id yes > darksub_M53_-005_R_alignstar.coo.1
# The resulting file will contain 3 columns: position (x,y) and index number of every star.

# Align all red frames to the reference image (darksub_M53_-005_R.fit) and name the resulting files with a "align" prefix:
imalign darksub_//@M53-R.list darksub_M53_-005_R.fit darksub_M53_-005_R_alignstar.coo.1 align//@M53-R.list boxsize=15 bigbox=51 trimimages=no


# Example of output. Check the X- and Y-shifts: they are small but would produce artefacts if ignored. N is the number of stars used to compute the shift. Almost all of them were used, meaning that the alignment stars were correctly selected. A very small number of used stars (e.g., 2 or 3) would lead to large positional errors.

#Shifts Image X-shift Err Y-shift Err N Internal
darksub_M53_-001_R.fit -5.369 (0.003) 9.666 (0.004) 48 (0.009,0.013)
darksub_M53_-002_R.fit -4.648 (0.003) 6.147 (0.003) 48 (0.006,0.009)
darksub_M53_-003_R.fit -1.384 (0.003) 5.500 (0.004) 48 (0.005,0.009)
darksub_M53_-004_R.fit -0.120 (0.003) 2.605 (0.004) 48 (0.006,0.014)
darksub_M53_-005_R.fit 0.000 (0.003) 0.000 (0.004) 48 (0.000,0.000)
darksub_M53_-006_R.fit 2.095 (0.003) -1.438 (0.004) 48 (0.008,0.016)
darksub_M53_-007_R.fit 1.396 (0.003) -3.046 (0.003) 48 (0.008,0.019)
darksub_M53_-008_R.fit 3.854 (0.003) -3.082 (0.004) 48 (0.011,0.024)
darksub_M53_-009_R.fit 3.915 (0.003) -1.297 (0.004) 48 (0.013,0.026)
darksub_M53_-010_R.fit 7.158 (0.003) -2.472 (0.004) 48 (0.009,0.018)

# If you notice some frames are incorrectly aligned (they are visibly off compared to the others), you can retry the alignment for these frames with another starlist. If a frame is glitched and definitely cannot be aligned, you can exclude it.

}}}
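Conceptually, the shift imalign reports is just the (clipped) mean offset of the matched alignment stars between the two frames. A toy numpy sketch with made-up star positions and centroiding errors:

```python
import numpy as np

# Positions of three alignment stars in the reference image (hypothetical):
ref_xy = np.array([[100.0, 200.0], [400.0, 350.0], [250.0, 120.0]])

# The same stars in the frame to align: a global offset plus small
# per-star centroiding errors.
true_shift = np.array([-5.369, 9.666])
frame_xy = ref_xy + true_shift + np.array([[0.01, -0.02],
                                           [0.00,  0.01],
                                           [-0.01, 0.00]])

# The measured shift is the mean per-star difference:
xshift, yshift = (frame_xy - ref_xy).mean(axis=0)
```

With many stars, the per-star centroiding errors average out, which is why the shift errors in the table above are only a few millipixels despite each centroid being far less precise.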

-----

The aligned, calibrated images taken with the red filters are finally combined using the task imcombine:

{{{#!cplusplus

# first create a list with the aligned frames:
files alignM53_-*R.fit > alignM53-R.list

# Set parameters of imcombine as the following:
epar imcombine

PACKAGE = immatch
   TASK = imcombine
    
input = List of images to combine
output = List of output images
(headers= ) List of header files (optional)
(bpmasks= ) List of bad pixel masks (optional)
(rejmask= ) List of rejection masks (optional)
(nrejmas= ) List of number rejected masks (optional)
(expmask= ) List of exposure masks (optional)
(sigmas = ) List of sigma images (optional)
(logfile= logfile) Log file

(combine= median) Type of combine operation
(reject = sigclip) Type of rejection
(project= no) Project highest dimension of input images?
(outtype= real) Output image pixel datatype
(outlimi= ) Output limits (x1 x2 y1 y2 ...)
(offsets= none) Input image offsets
(masktyp= none) Mask type
(maskval= 0) Mask value
(blank = 0.) Value if there are no pixels

(scale = exposure) Image scaling
(zero = none) Image zero point offset
(weight = none) Image weights
(statsec= ) Image section for computing statistics
(expname= exposure) Image header exposure time keyword

(lthresh= INDEF) Lower threshold
(hthresh= INDEF) Upper threshold
(nlow = 1) minmax: Number of low pixels to reject
(nhigh = 1) minmax: Number of high pixels to reject
(nkeep = 1) Minimum to keep (pos) or maximum to reject (neg)
(mclip = yes) Use median in sigma clipping algorithms?
(lsigma = 3.) Lower sigma clipping factor
(hsigma = 3.) Upper sigma clipping factor
(rdnoise= 0.) ccdclip: CCD readout noise (electrons)
(gain = 1.) ccdclip: CCD gain (electrons/DN)
(snoise = 0.) ccdclip: Sensitivity noise (fraction)
(sigscal= 0.1) Tolerance for sigma clipping scaling corrections
(pclip = -0.5) pclip: Percentile clipping parameter
(grow = 0.) Radius (pixels) for neighbor rejection
(mode = ql)

# Combine your scientific, aligned images:
imcombine @alignM53-R.list output=align_M53red &

# Make a backup copy of all your final images:
imcopy align_M53*fits %align_%final_%M53*fits &

}}}

Finally, repeat the last step for green and blue images.

NOTES:

 * You don't need to produce a new starlist for every filter: a fairly bright star seen in one optical filter is expected to be visible in the two other filters. So the starlist you create in one filter is valid for every filter.

 * You don't even need to pick a different reference frame for each filter. If the positional difference between two filters is not dramatic, the "imalign" command will find the reference stars in every filter and align the frames correctly.

So you can simply do

{{{#!cplusplus

imalign darksub_//@M53-all.list darksub_M53_-005_R.fit darksub_M53_-005_R_alignstar.coo.1 align//@M53-all.list boxsize=15 bigbox=51 trimimages=no

}}}

And then combine the frames ''per filter'' to produce your final reduced frames.




-------



You can now critically examine the results. Repeat any necessary step. Compare the well-calibrated, stacked images to the raw data, and be proud!

Here are the files used in that example:

/!\ T.B.D.: Include intermediate images

{{attachment:image.png|raw 120 sec exposure with red filter|width=680}}

{{attachment:image.png|master flat taken with the red filter|width=680}}

{{attachment:image.png|calibrated image with R filter, combined 5*120 sec|width=680}}

{{attachment:rgb_finalM3.tif|Nice image of M53 with R, V, and B filter!|width=680}}
[[/TOPCATexample| On this page]], we show an example of the production of a colour-magnitude diagram using TOPCAT.

  * '''''Cosmic rays''''' Cosmic rays (CR) produce a stream of high-energy particles that interact with the camera, leaving bright spots or tracks in the images. By combining multiple images using the average, or better, the median of the values at each pixel, the extreme values produced by CR hits are easily spotted and removed. The image combinations are often done with a rejection algorithm to remove extreme variations. If an obvious CR track remains in a combined image, it is better to find out from which individual image it originates, and remove the CR prior to combining the images.

=== What you need for data reduction ===

To calibrate your sky data, you will need a set of calibration frames. They should be taken close (in time) to your sky observations, ideally during the same night.

The minimum required is:

  1. '''A set of bias frames.''' These are zero-second exposures (the shutter remains closed). In principle the bias is constant, but statistical fluctuations and interference introduce some noise. It is best to combine several (e.g. 10) bias frames: the root-mean-square noise decreases as the square root of the number of frames combined.
  1. '''A set of dark frames.''' These are long exposures taken with the shutter closed. You can either match the exposure time of the dark frames to that of your sky observations, or use a scalable dark frame, from which the bias has been subtracted. The second option is more flexible. Take a set of dark frames with increasing exposure times (from short to long). Here, combining the dark frames mostly helps to remove CR hits (high-energy particles do not "see" the shutter...).
  1. '''A set of flat-fields.''' These are images of a uniform source of light; usually the twilight sky is the best choice. The exposure time should be set so that the pixel values reach a good fraction (20%-50%) of the full-well capacity. For the GOWI camera you should aim for ~20 000 counts. These good statistics reveal the desired level of detail. An automatic sequence to produce twilight sky flat-fields is available. Note that because vignetting and CCD sensitivity are colour-dependent, flat-fields must be taken with the same filter as the images to calibrate. As before, several exposures are taken and combined.
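The square-root noise reduction from combining bias frames can be checked numerically; a minimal numpy sketch with synthetic frames (the read noise of 10 counts is a hypothetical value):

```python
import numpy as np

rng = np.random.default_rng(5)
read_noise = 10.0                     # hypothetical RMS read noise, in counts
biases = 1000.0 + rng.normal(0.0, read_noise, size=(16, 256, 256))

# Averaging 16 bias frames reduces the noise by sqrt(16) = 4,
# from ~10 counts down to ~2.5 counts:
master_bias = biases.mean(axis=0)
```

The same scaling argument applies to darks and flats: the more frames you combine, the quieter the master calibration frame.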

=== In practice ===

'''Typical calibration sequence'''

'''1. Look at what you have'''

  * Sit down and sort your data. If you have taken images of several objects, it is best to keep the calibration frames, which will be used for all objects, separated from the object data. Make sure you know which files are bias frames, dark frames, flat-fields (in which filter), etc. Ideally, this should already be clear if you used a proper naming convention for your files.

'''2. Prepare the master bias frame'''

  * Combine all the bias frames into one image (the master bias). You can take either the average or the median of each pixel value. The first method reduces random noise more effectively; the second is better at excluding abnormal (extreme) values.

'''3. Prepare the master (scalable) dark frames'''

  * First, subtract the master bias from all your dark frames. Then, combine the dark frames. The resulting master dark frame represents the dark current accumulated during <t>, the average of the exposure times of all the dark frames used. If you prefer not to subtract the bias separately and use a scalable dark, you can combine all dark frames of the same exposure time, without subtracting the bias.

Check (by looking at the data) that the master dark frame has no remaining CR. The averaging (in particular with the median) should have removed all of them. If a CR feature remains, find which individual dark frame it comes from, and either remove the CR hits from that image or exclude it from the master dark frame.

'''4. Prepare the master flat-fields'''

  * First, subtract the master dark frame from all your flat-field frames: either the scalable master dark, scaled by the ratio of the flat-field exposure time to that of the master dark, or the master dark having the same exposure time as the flat-field. Then, combine the dark-subtracted flat-fields separately for each filter. At the end, one obtains a master flat-field for each filter.

5. Process the raw data

  • a) Preparation
    • Again, have a first look at the raw data. What can of object do I have? What is the exposure time, the filters used? It is a good idea to have a backup copy of your raw data, and do the calibration in a separate directory than the one with the (processed) calibration frames. Copy the necessary calibration frames in the working directory of your choice.
    b) Subtract the master bias
    • This is done for all images and is neither exposure time nor filter-dependent.
    b) (bis) Subtract only the master dark
    • If you choose not to subtract the bias and dark separately, you can use a non-bias-subtracted master dark having the same exposure time as the science data. Then skip step c). and go directly to flat-field correction.

    c) Subtract the scaled master dark frame
    • Subtracted the master dark frame. The dark should be scaled to match the exposure time of each raw sky exposure.
    d) Divide by the corresponding flat-field
    • Divide the dark-subtracted images by the master flat-field taken with the same filter. The flat-field should be normalised by its mean, so that the mean value of the sky images remains unchanged. You can either produce normalised master flats (in step 4) or scale the flat-field by its mean when doing the division.
    e) Align and stack the calibrated images
    • It is common practice to split the total desired exposure time into several exposures. This allows good rejection of CR, mitigates sky-tracking uncertainties, which leave star trails if the telescope does not follow the sky rotation accurately enough, and can be used to avoid saturating the brightest stars in the field. All calibrated images of one object in one filter are then stacked. If the pointing changes (even slightly) from one exposure to another, it is necessary to align the images on a common grid first, to avoid a "blur" in the combined image. This can easily be done by matching the positions of several bright objects in the field.
    f) Look and check
    • Be sure to check the final images and compare them to the raw data: Are there remaining CR hits? Are stars more blurred in the final image than in the raw data? Are there remaining large-scale gradients (indicating incorrect flat-fielding)? Redo the necessary steps accordingly. The calibration is now done! You can go on with the analysis.
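Steps b) to d) can be summarised in a few lines of numpy. The frames and exposure times below are synthetic, chosen so that the result can be checked by eye (a uniform sky should come out uniform after calibration):

```python
import numpy as np

def calibrate(raw, master_bias, master_dark, dark_exptime,
              master_flat, sci_exptime):
    """Steps b)-d): subtract the bias, subtract the dark scaled to
    the science exposure time, then divide by the flat-field
    (assumed normalised to a mean of 1)."""
    dark_scaled = master_dark * (sci_exptime / dark_exptime)
    return (raw - master_bias - dark_scaled) / master_flat

# Synthetic 2x2 example: a flat sky of 400 counts seen through a
# camera with bias 100, dark current 6 counts / 60 s, and a known
# pixel-to-pixel sensitivity pattern.
master_bias = np.full((2, 2), 100.0)
master_dark = np.full((2, 2), 6.0)                    # 60 s, bias-subtracted
master_flat = np.array([[1.25, 0.75], [0.75, 1.25]])  # mean 1
raw = master_bias + master_dark * 0.5 + 400.0 * master_flat  # a 30 s frame
calibrated = calibrate(raw, master_bias, master_dark, 60.0,
                       master_flat, 30.0)
# calibrated is a uniform 400-count sky, as expected
```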

A working example with IRAF

On this page, we show an example of image calibration using IRAF.


Photometry

Photometry is the technique of measuring the brightness of astronomical objects. To do so, we quantify the signal from a given source collected on our camera during the exposure time. In other words, we only measure the instrumental flux of a source. To measure the intrinsic luminosity of an object, we need a flux calibration (how much signal is recorded for a given number of photons from the source) and a knowledge of the distance and extinction towards the source (see e.g. the description of the star cluster experiment).

Various methods are available to perform photometry.

  • The simplest is aperture photometry: the signal is integrated over a circular aperture defined around the star of interest. A second region, usually a concentric annulus around the circular aperture, is used to estimate the sky background. The background signal is scaled to account for the relative sizes of the star and background extraction regions before being subtracted from the source signal. A number of problems limit the use of aperture photometry for some purposes:

    • Choice of aperture: Too small an aperture misses the fraction of a star's flux that falls in the wings of the PSF. Too large an aperture includes extra noise from the sky background.
    • Crowding: It becomes increasingly difficult to define source and background regions in crowded fields, where stars are close to one another. In some cases (poor seeing, globular clusters...), aperture photometry might even be impossible, because stars are blended and cannot be separated anymore.
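As an illustration, aperture photometry with a concentric sky annulus can be written in a few lines of numpy; the positions, radii and counts below are synthetic:

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Sum the counts in a circular aperture around (x0, y0) and
    subtract the sky level, estimated as the median of a concentric
    annulus and scaled by the number of aperture pixels."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(image[annulus])
    return image[aperture].sum() - sky_per_pixel * aperture.sum()

# Synthetic frame: a flat sky of 5 counts/pixel plus a 1000-count
# "star" concentrated in a single pixel at (25, 25).
image = np.full((51, 51), 5.0)
image[25, 25] += 1000.0
flux = aperture_photometry(image, 25.0, 25.0, 5.0, 8.0, 12.0)
inst_mag = -2.5 * np.log10(flux)   # instrumental magnitude, here -7.5
```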

  • The way to go beyond the limits of aperture photometry is to perform so-called PSF-fitting photometry (or PSF photometry for short). There, a model of the point-spread function (PSF) is determined for an image using some representative stars. This model is then fitted to all the stars detected in the image, making it possible to separate the contributions of an object and of the sky in a given region. Even if several stars are present in that region, their signals can be separated by fitting the PSF to each of them. There is also no concern about the size of an aperture, because all the signal under the PSF can be integrated. This method is of course more demanding than simple aperture photometry. In particular, it requires care when preparing the PSF model.

For the star cluster experiment the use of PSF photometry is strongly advised: it will increase the number of stars you can measure in globular clusters. The software used for PSF photometry is called DAOPHOT. It was developed by Peter Stetson (1987) and is available as a package in the IRAF environment.

In practice

Below we describe a typical analysis sequence with DAOPHOT. The goal is to use the calibrated images to perform PSF photometry and obtain a list of magnitudes for all stars in the images.

1. Make an initial star list with a source detection algorithm. This typically searches for local density enhancements with peak amplitudes greater than a given threshold above the local background.

2. Perform aperture photometry on the detected objects. This gives a rough estimate of the magnitude of the stars and helps in choosing "PSF stars" (see below).

3. Select "PSF stars", i.e. stars that will be used to build the PSF model. Having a good, realistic PSF model is the critical step that guarantees the success of the photometry. Therefore, selecting isolated stars with a high signal-to-noise ratio is essential. There should be enough stars (between 5 and 30, for instance), distributed across the field of view, to account for possible spatial variations of the PSF.

4. Compute the PSF model using the PSF stars. Various functional forms exist for the PSF model, with different numbers of parameters. In addition, the model can be constant across the image or vary with position.

5. Fit and subtract the current PSF model from the PSF stars. If the model is appropriate, the PSF stars should be cleanly subtracted from the image. If not, either adapt the list of PSF stars (e.g. add more stars, remove those that do not subtract out cleanly, etc.) or change the model used (e.g. a more complex model, or one varying with position).

6. If the current PSF model is satisfactory, fit the model to all detected stars. In this way, stars with nearby neighbours can be separated and have accurate photometry.

7. (Optional) If needed, a second source detection can be run on the fitted-star-subtracted image to search for faint stars not detected in the first run.
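The key idea behind steps 4 to 6, separating blended stars by fitting a PSF model to each of them, can be illustrated with a toy numpy example. Here the star positions and the Gaussian width are assumed known, so the fit for the two fluxes and the sky level reduces to a linear least squares; DAOPHOT of course also fits the positions and uses more sophisticated PSF models:

```python
import numpy as np

def gaussian_psf(shape, x0, y0, sigma):
    """Unit-amplitude circular Gaussian PSF evaluated on a pixel grid."""
    yy, xx = np.indices(shape)
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

shape, sigma = (25, 25), 2.0
# Two blended stars only 3 pixels apart, plus a constant sky of 10 counts.
psf1 = gaussian_psf(shape, 11.0, 12.0, sigma)
psf2 = gaussian_psf(shape, 14.0, 12.0, sigma)
image = 300.0 * psf1 + 120.0 * psf2 + 10.0

# Linear least squares: image ~ a1*psf1 + a2*psf2 + sky
A = np.column_stack([psf1.ravel(), psf2.ravel(), np.ones(image.size)])
(a1, a2, sky), *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
# Recovers a1 ~ 300, a2 ~ 120, sky ~ 10 despite the blending,
# which a simple aperture could not disentangle.
```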

A working example with IRAF

On this page, we show an example of PSF photometry using DAOPHOT in IRAF.


Final analysis

Once photometry is done, the final step is to create scientific products (plots, light curves, spectra...) and use them to draw conclusions from the results. In the framework of the star cluster experiment, the scientific products to make are colour-magnitude diagrams.

In practice

Here we describe the typical sequence to follow to produce a reliable colour-magnitude diagram using the TOPCAT software.

1. First, you need to create a tabular list of all your sources in every filter, with information such as position, magnitude and error.

2. Feed the software with these tables and perform a cross-matching of the sources between the different filters (several matching algorithms are available). This produces a concatenated table with all the successfully matched source pairs and all the available information.

3. The graphical interface of TOPCAT allows fast and easy plotting of the tables. A colour-magnitude diagram is a plot of the star magnitude in a given filter against a colour index. Assign one of the magnitude columns to the Y axis and the difference between two magnitude columns to the X axis.

4. The software also allows easy data handling. You can define subsets by hand (literally, by drawing them on the plot with the mouse) or using algebraic expressions. A good practice is to reject every source with a poor chi^2 (resulting from improper PSF fitting) and/or a too-large magnitude error. This is optional but highly recommended.

5. You can directly apply any photometric correction you want to the tables without overwriting them, and experiment with them in TOPCAT. For example, you can apply corrections for the Galactic reddening and extinction.
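The cross-matching of step 2 and the colour computation of step 3 can be sketched outside TOPCAT with a naive nearest-neighbour match; the catalogue positions and magnitudes below are invented:

```python
import numpy as np

def crossmatch(x1, y1, x2, y2, max_sep):
    """For each source in catalogue 1, find the nearest source in
    catalogue 2 and keep the pair if they are closer than max_sep
    (in pixels). Returns (index1, index2) pairs."""
    pairs = []
    for i in range(len(x1)):
        d = np.hypot(x2 - x1[i], y2 - y1[i])
        j = int(np.argmin(d))
        if d[j] <= max_sep:
            pairs.append((i, j))
    return pairs

# Toy catalogues in two filters (B and V); star 2 of the B catalogue
# has no counterpart in V, so it drops out of the matched table.
xb, yb = np.array([10., 50., 80.]), np.array([10., 50., 80.])
mb = np.array([14.2, 15.8, 17.1])
xv, yv = np.array([50.3, 10.1]), np.array([49.8, 9.9])
mv = np.array([15.1, 13.5])
matches = crossmatch(xb, yb, xv, yv, max_sep=1.0)
# matches == [(0, 1), (1, 0)]; the colour index of each pair is B - V,
# which would be the X axis of the colour-magnitude diagram.
colours = [mb[i] - mv[j] for i, j in matches]
```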

A working example with TOPCAT

On this page, we show an example of the production of a colour-magnitude diagram using TOPCAT.

cogwiki: ForPraAnalysis (last edited 2022-10-08 16:40:38 by VadimBurwitz)