Change log: PM 11/04/14 This is a first attempt to introduce the principles of CCD data reduction and analysis (i.e. photometry). It includes a rough tutorial for the basic reduction steps with IRAF. Feel free to add comments.

PM 20/05/14 Modified the data reduction sequence: included a "simple reduction" option, without a separate bias subtraction.

PM 21/05/14 Added description of photometry and methods. Added link to the photometry example.

A quick guide to data analysis

Data reduction

Why the data need calibration

Images of an astronomical object taken with a CCD camera will include many unwanted signals of various origins. The goal of the calibration is to remove these effects, so that the pixel values of the calibrated images are an accurate representation of the sky light that fell on the camera during your observations.

This quick tutorial will walk you through the standard reduction steps, with an example analysis in IRAF.

Effects to correct

  • Bias When the CCD images are read, a small voltage is applied to all pixels. This electrical offset is called bias. It has to be subtracted, so that the zero points of the CCD output and the pixel-value scales coincide.

  • Dark Current The thermal agitation of silicon in the CCD produces electrons, even when no light falls on the CCD (hence the name 'dark' current). Dark current is always present, i.e. also in your sky images. It scales linearly with exposure time, at a rate dependent on the CCD temperature, and needs to be subtracted.

  • Non-uniformity (flat-fielding) The conversion of light to electrical signal on the camera varies with position, both on small scales (pixel-to-pixel), due to changing quantum efficiency of individual pixels, and on larger scales, because of vignetting and dust shadowing. To correct these effects, we need to divide the data by the normalised sensitivity of each pixel. This is estimated by observing a (theoretically) uniform source of light. The variations observed in this exposure, called a flat-field, are a measure of the non-uniformities of the camera. Note that the flat-field(s) need to be first corrected for the two effects described above.

  • Cosmic rays Cosmic rays (CR) produce a stream of high-energy particles that interact with the camera, leaving bright spots or tracks in the images. By combining multiple images using the average, or better, the median of the pixel values at each pixel, the extreme values produced by CR hits are easily spotted and removed. The image combinations are often done with a rejection algorithm to remove extreme variations. If an obvious CR track remains in a combined image, it is better to find out from which individual image it originates, and remove the CR prior to combining the images.
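
Taken together, these corrections amount to subtracting the bias and a scaled dark from each raw frame and dividing by a normalised flat-field. The following minimal Python/numpy sketch only illustrates that arithmetic; the array names (raw, master_bias, dark_rate, master_flat) are hypothetical, and the actual reduction with IRAF is described further down:

{{{#!python
import numpy as np

def calibrate(raw, master_bias, dark_rate, master_flat, t_exp):
    """Apply bias, dark and flat-field corrections to one raw frame.

    raw          - raw science frame (counts)
    master_bias  - combined bias frame (counts)
    dark_rate    - bias-subtracted master dark in counts per second
    master_flat  - combined, dark-subtracted flat-field in the same filter
    t_exp        - exposure time of the raw frame (seconds)
    """
    flat_norm = master_flat / np.mean(master_flat)  # normalise so the sky level is preserved
    return (raw - master_bias - dark_rate * t_exp) / flat_norm
}}}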

What do you need for data reduction

To calibrate your sky data, you will need a set of calibration frames. They should be taken close in time to your sky observations, ideally during the same night. Standard calibration frame sequences are available via MaximDL or ACP. /!\ I need to check what is there!

The minimum required is:

  1. A set of bias frames. These are zero-second integration exposures (the shutter remains closed). In principle the bias is constant, but statistical fluctuations and interferences introduce some noise. It is best to combine several (e.g. 10) bias frames: the root-mean-square noise will decrease as the square root of the number of bias frames (see the short demonstration after this list).

  2. A set of dark frames. These are long exposures taken with the shutter closed. You can either match the exposure time of the dark frames to the exposure time of your sky observations, or use a scalable dark frame, from which the bias has been subtracted. The second option is more flexible. Take a set of dark frames with increasing exposure times (from short to long). Here, combining the dark frames mostly helps to remove CR hits (high-energy particles do not "see" the shutter...).

  3. A set of flat-fields. These are images of a uniform source of light. Usually the twilight sky is the best choice. The exposure time should be set so that the pixel values are a good fraction (20%-50%) of the full well capacity. For the GOWI camera you should aim for ~20 000 counts. Such good statistics are needed to reveal the desired level of detail. An automatic sequence to produce twilight-sky flat-fields is available. Note that because vignetting and CCD sensitivity are colour-dependent, flat-fields must be taken with the same filter as the image to calibrate. As before, several exposures are taken to be combined.
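
The square-root scaling quoted for the bias frames can be checked with a short simulation. This is a toy demonstration only; the read-noise value, the mean bias level and the frame size are invented for illustration:

{{{#!python
import numpy as np

rng = np.random.default_rng(1)
read_noise = 10.0      # hypothetical RMS read noise (counts)
shape = (512, 512)     # hypothetical frame size

for n_frames in (1, 4, 16, 64):
    # Simulate n_frames bias frames containing only Gaussian read noise
    stack = rng.normal(loc=1000.0, scale=read_noise, size=(n_frames, *shape))
    master_bias = stack.mean(axis=0)
    # The scatter of the combined frame shrinks as 1/sqrt(n_frames)
    print(n_frames, master_bias.std(), read_noise / np.sqrt(n_frames))
}}}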

In practice

Typical calibration sequence

1. Look at what you have:

  • Sit down and sort your data. If you have taken images of several objects, it is best to keep the calibration frames, which will be used for all objects, separated from the object data. Make sure you know which files are bias frames, dark frames, flat-fields (in which filter), etc. Ideally, this should be clear if you used a proper naming convention for your files.

2. Prepare the master bias frames

  • Combine all the bias frames into one image (the master bias). You can take either the average or the median of each pixel value. The first method reduces random noise more effectively; the second is better at excluding abnormal (extreme) values.
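
As an alternative view of this step, the combination can be sketched with Python (numpy and astropy). The file pattern bias_*.fits is a hypothetical naming convention; the IRAF example linked further down shows the actual workflow used in the lab:

{{{#!python
import glob
import numpy as np
from astropy.io import fits

# Read all bias frames (hypothetical file names) into a 3-D stack
bias_files = sorted(glob.glob("bias_*.fits"))
bias_stack = np.array([fits.getdata(f).astype(float) for f in bias_files])

# Median combine to reject abnormal values; use .mean(axis=0) instead to
# reduce random noise more effectively
master_bias = np.median(bias_stack, axis=0)
fits.writeto("master_bias.fits", master_bias, overwrite=True)
}}}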

3. Prepare a master (scalable) dark frame

  • First, subtract the master bias from all your dark frames. Then, combine all dark frames. The resulting master dark frame represents the dark current obtained during <t>, the average of the exposure times of all the dark frames that have been used. If you prefer not to subtract the bias separately (and hence not to use a scalable dark), you can combine all dark frames that have the same exposure time as your science data, without subtracting the bias.

Check (by looking at the data) that the master dark frame has no remaining CRs. The averaging (in particular with the median) should have removed all of them. If a CR feature remains, find which individual dark frame it comes from, and either remove the CR hits from that image or exclude it from the master dark frame.
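
Continuing the Python sketch (with hypothetical file names dark_*.fits), one simple way to build a scalable master dark is to scale each bias-subtracted dark frame to counts per second before combining; this is a slight variant of the recipe above, which combines first and interprets the result as the dark current during the mean exposure time <t>. The EXPTIME header keyword is an assumption:

{{{#!python
import glob
import numpy as np
from astropy.io import fits

master_bias = fits.getdata("master_bias.fits")

dark_files = sorted(glob.glob("dark_*.fits"))
# Subtract the master bias from each dark, then scale it to counts per second
dark_stack = np.array([
    (fits.getdata(f).astype(float) - master_bias) / fits.getheader(f)["EXPTIME"]
    for f in dark_files
])

# Median combine to reject CR hits present in individual dark frames
master_dark_rate = np.median(dark_stack, axis=0)
fits.writeto("master_dark_rate.fits", master_dark_rate, overwrite=True)
}}}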

4. Prepare the master flat-fields

  • First, subtract the master dark frame from all your flat-field frames: either the scalable master dark, scaled by the ratio of the flat-field exposure time to that of the master dark, or the master dark having the same exposure time as the flat-fields. Then, combine the dark-subtracted flat-fields separately for each filter. At the end one obtains a master flat-field for each filter.
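
In the same Python sketch, a master flat-field for one filter (hypothetical file names flatV_*.fits for the V band) could be built as follows; each flat is scaled to a common level before combining, since twilight flats usually have different brightnesses:

{{{#!python
import glob
import numpy as np
from astropy.io import fits

master_bias = fits.getdata("master_bias.fits")
dark_rate = fits.getdata("master_dark_rate.fits")    # counts per second, from the previous step

flats = []
for f in sorted(glob.glob("flatV_*.fits")):          # hypothetical V-band flat-field names
    data = fits.getdata(f).astype(float)
    t_exp = fits.getheader(f)["EXPTIME"]             # assumes an EXPTIME keyword
    data = data - master_bias - dark_rate * t_exp    # remove bias and scaled dark
    flats.append(data / np.median(data))             # bring all flats to the same level

master_flat_V = np.median(flats, axis=0)
fits.writeto("master_flat_V.fits", master_flat_V, overwrite=True)
}}}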

5. Process the raw data

  • a) Preparation
    • Again, have a first look at the raw data. What kind of object do I have? What are the exposure times and filters used? It is a good idea to keep a backup copy of your raw data, and to do the calibration in a separate directory from the one with the (processed) calibration frames. Copy the necessary calibration frames into the working directory of your choice.
    b) Subtract the master bias
    • This is done for all images and is neither exposure time nor filter-dependent.
    b) (bis) Subtract only the master dark
    • If you choose not to subtract the bias and dark separately, you can use a non-bias-subtracted master dark having the same exposure time as the science data. Then skip step c) and go directly to the flat-field correction.

    c) Subtract the scaled master dark frame
    • Subtract the master dark frame, scaled to match the exposure time of each raw sky exposure.
    d) Divide by the corresponding flat-field
    • Divide the dark-subtracted images by the master flat-field taken with the same filter. The flat-field should be normalised by its mean, so that the mean value of the sky images remains unchanged. You can either produce normalised master flats (in step 4) or scale the flat-field by its mean when doing the division.
    e) Align and stack the calibrated images
    • It is common practice to split the total desired exposure time into several exposures. This allows a good rejection of CRs, avoids star trails caused by imperfect sky tracking (if the telescope does not follow the sky rotation accurately enough), and can be used to avoid saturating the brightest stars in the field. All calibrated images of one object in one filter are then stacked. If the pointing changes (even slightly) from one exposure to another, it is necessary to align them first on a common grid, to avoid a "blur" in the combined image. This can easily be done by matching the positions of several bright objects in the field (see the Python sketch after this list).
    f) Look and check
    • Be sure to check the final images and compare them to the raw data: Are there remaining CRs? Are the stars more blurry in the final image than in the raw data? Are there remaining large-scale gradients (incorrect flat-fielding)? Redo the necessary steps accordingly. The calibration is now done! You can go on with the analysis.
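
Steps b) to e) can be sketched in Python in the same spirit as the previous snippets. The alignment below uses whole-pixel shifts estimated from the brightest pixel near one bright star, a deliberately crude stand-in for the sub-pixel registration a real tool performs; the science file names, the EXPTIME keyword and the star position (x0, y0) are all hypothetical:

{{{#!python
import glob
import numpy as np
from astropy.io import fits

master_bias = fits.getdata("master_bias.fits")
dark_rate = fits.getdata("master_dark_rate.fits")
master_flat = fits.getdata("master_flat_V.fits")
flat_norm = master_flat / np.mean(master_flat)       # step d): normalise the flat by its mean

calibrated = []
for f in sorted(glob.glob("ngc104_V_*.fits")):       # hypothetical science frames, one filter
    data = fits.getdata(f).astype(float)
    t_exp = fits.getheader(f)["EXPTIME"]
    # steps b) and c): bias and scaled dark subtraction; step d): flat-field division
    calibrated.append((data - master_bias - dark_rate * t_exp) / flat_norm)

# Step e): align on one bright star near the hypothetical position (x0, y0)
x0, y0, box = 512, 512, 20
ref = calibrated[0]
ref_sub = ref[y0 - box:y0 + box, x0 - box:x0 + box]
aligned = [ref]
for img in calibrated[1:]:
    sub = img[y0 - box:y0 + box, x0 - box:x0 + box]
    dy, dx = (np.array(np.unravel_index(np.argmax(ref_sub), ref_sub.shape))
              - np.array(np.unravel_index(np.argmax(sub), sub.shape)))
    aligned.append(np.roll(img, (dy, dx), axis=(0, 1)))   # whole-pixel shift only

# Median stacking also rejects remaining CR hits
stacked = np.median(aligned, axis=0)
fits.writeto("ngc104_V_stacked.fits", stacked, overwrite=True)
}}}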

A working example with IRAF

On this page, we show an example of image calibration using IRAF.

Photometry

Photometry is the technique of measuring the brightness of astronomical objects. To do so, we quantify the signal from a given source collected on our camera during the exposure time. In other words, we only measure the instrumental flux of a source. To measure the intrinsic luminosity of an object, we need a flux calibration (how much signal is recorded for a given number of photons from the source) and knowledge of the distance and extinction towards the source (see e.g. description of the star cluster experiment).

Various methods are available to perform photometry.

  • The simplest is aperture photometry: the signal is integrated over a circular aperture defined around the star of interest. A second region, usually a concentric annulus around the circular aperture, is used to estimate the sky background. The background signal is scaled to account for the relative sizes of the source and background extraction regions, before being subtracted from the source signal (a short numerical sketch follows this list). A number of problems limit the use of aperture photometry for some purposes:

    • Choice of aperture: An aperture too small will ignore the fraction of the flux of a star that is in the wings of the PSF. An aperture too large will include noise from the sky background.
    • Crowding: It becomes increasingly difficult to define source and background regions in crowded fields, where stars are close to one another. In some cases (poor seeing, globular clusters...), aperture photometry might even be impossible, because stars are blended and cannot be separated anymore.

  • The way to go beyond the limits of aperture photometry is to perform so-called PSF-fitting photometry (or PSF photometry for short). Here, a model of the point-spread function (PSF) is determined for an image using some representative stars. This model is then fitted to all the stars detected in the image. It is then possible to separate the contribution of an object from that of the sky in a given region. Even if several stars are present in that region, their signals can be separated by fitting the PSF to each of them. There is also no concern about the size of an aperture, because all the signal under the PSF can be integrated. This method is of course more demanding than simple aperture photometry. In particular, it requires care when preparing the PSF model.
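
To make the aperture method concrete, here is a minimal numpy sketch of aperture photometry on a single star; the image, star position and radii are hypothetical, and a real analysis would use IRAF (or a dedicated photometry package) rather than this toy function:

{{{#!python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Background-subtracted counts of a star at (x0, y0).

    r_ap         - radius of the source aperture (pixels)
    r_in, r_out  - inner/outer radii of the sky annulus (pixels)
    """
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)

    source_mask = r <= r_ap
    sky_mask = (r >= r_in) & (r <= r_out)

    sky_per_pixel = np.median(image[sky_mask])        # robust sky estimate from the annulus
    # Scale the sky to the number of pixels in the aperture before subtracting it
    return image[source_mask].sum() - sky_per_pixel * source_mask.sum()
}}}

The scaling by the number of aperture pixels is the "relative size" correction mentioned above; the result is an instrumental flux, from which an instrumental magnitude follows as -2.5 log10(flux / exposure time) plus an arbitrary zero point.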

For the star cluster experiment the use of PSF photometry is strongly advised. This will increase the number of stars you can measure in globular clusters. The software used for PSF photometry is called DAOPHOT. It was developed by Peter Stetson (1987) and is a package of the IRAF environment.
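
The actual PSF photometry for the experiment should be done with DAOPHOT as described in the example linked below. Purely to illustrate the principle of fitting a PSF model to blended stars, here is a toy Python sketch using scipy, with a circular Gaussian standing in for the real PSF; all positions, fluxes and the noise level are invented:

{{{#!python
import numpy as np
from scipy.optimize import curve_fit

def two_stars(coords, f1, x1, y1, f2, x2, y2, sky, sigma=2.0):
    """Two circular Gaussian 'PSFs' of fixed width sigma on a flat sky."""
    x, y = coords
    g1 = f1 * np.exp(-((x - x1) ** 2 + (y - y1) ** 2) / (2 * sigma ** 2))
    g2 = f2 * np.exp(-((x - x2) ** 2 + (y - y2) ** 2) / (2 * sigma ** 2))
    return (g1 + g2 + sky).ravel()

# Simulate a small image containing two blended stars plus noise
y, x = np.mgrid[0:40, 0:40].astype(float)
rng = np.random.default_rng(2)
truth = (800.0, 18.0, 20.0, 300.0, 22.5, 21.0, 50.0)    # f1, x1, y1, f2, x2, y2, sky
image = two_stars((x, y), *truth).reshape(x.shape) + rng.normal(0.0, 5.0, x.shape)

# Fit both stars and the sky simultaneously; rough initial guesses suffice here
guess = (500.0, 17.0, 19.0, 200.0, 23.0, 22.0, 40.0)
params, _ = curve_fit(two_stars, (x, y), image.ravel(), p0=guess)
print("fitted fluxes, positions and sky:", params)
}}}

Even though the two simulated stars overlap, the fit recovers the flux of each one separately, which is exactly what aperture photometry cannot do in crowded fields.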

A working example with IRAF

On this page, we show an example of PSF photometry using DAOPHOT in IRAF.
