#acl VadimBurwitz:read,write,delete,revert,admin SherrySuyu:read,write,delete,revert AlejandraMelo:read,write,delete,revert All:read

= A quick guide to data analysis =

== Data reduction ==

=== Why the data need calibration ===

Images of an astronomical object taken with a CCD camera will include many unwanted signals of various origins. The goal of the calibration is to remove these effects, so that the pixel values of the calibrated images are an accurate representation of the sky light that fell on the camera during your observations. This quick tutorial will walk you through the standard reduction steps, with an example of analysis with IRAF.

=== Effects to correct ===

 * '''''Bias''''' When a CCD image is read out, a small voltage is applied to all pixels. This electrical offset is called ''bias''. It has to be '''subtracted''', so that the zero points of the CCD output and of the pixel-value scale coincide.
 * '''''Dark current''''' The thermal agitation of the silicon in the CCD produces electrons, even when no light falls on the CCD (hence the name 'dark' current). Dark current is always present, i.e. also in your sky images. It scales linearly with exposure time, at a rate dependent on the CCD temperature, and needs to be '''subtracted'''.
 * '''''Non-uniformity (flat-fielding)''''' The conversion of light to electrical signal on the camera varies with position, both on small scales (pixel-to-pixel), due to the varying quantum efficiency of individual pixels, and on larger scales, because of vignetting and dust shadowing. To correct these effects, we need to '''divide''' the data by the normalised sensitivity of each pixel. This is estimated by observing a (theoretically) uniform source of light. The variations observed in such an exposure, called a '''flat-field''', are a measure of the non-uniformities of the camera. Note that the flat-field(s) need to be ''first'' corrected for the two effects described above. A short code sketch of the combined correction follows this list.
 * '''''Cosmic rays''''' Cosmic rays (CR) produce a stream of high-energy particles that interact with the camera, leaving bright spots or tracks in the images. By combining multiple images using the average, or better, the ''median'' of the pixel values at each pixel, the extreme values produced by CR hits are easily spotted and removed. The image combinations are often done with a rejection algorithm to remove extreme variations. If an obvious CR track remains in a combined image, it is better to find out from which individual image it originates, and remove the CR prior to combining the images.
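To make the order of operations explicit, here is a minimal sketch of the per-pixel correction applied to a single raw frame. It uses Python with numpy and astropy rather than IRAF; the file names, the dark exposure time and the assumption of a bias-subtracted, scalable master dark are placeholders for illustration only.

{{{#!python
# Minimal sketch of the calibration arithmetic for one raw frame.
# Python + numpy + astropy (not IRAF); file names and t_dark are placeholders.
import numpy as np
from astropy.io import fits

raw, hdr = fits.getdata("object_raw.fits", header=True)  # raw science frame
bias = fits.getdata("master_bias.fits")                   # master bias
dark = fits.getdata("master_dark.fits")                   # bias-subtracted, scalable master dark
flat = fits.getdata("master_flat_V.fits")                 # master flat taken in the same filter

t_exp = hdr["EXPTIME"]   # exposure time of the science frame (standard FITS keyword)
t_dark = 300.0           # exposure time of the master dark (placeholder value, seconds)

# Subtract the bias, subtract the dark current scaled to the science exposure time,
# then divide by the flat-field normalised to a mean of 1.
calibrated = (raw - bias - dark * (t_exp / t_dark)) / (flat / flat.mean())

fits.writeto("object_cal.fits", calibrated.astype(np.float32), hdr, overwrite=True)
}}}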
=== What do you need for data reduction ===

To calibrate your sky data, you will need a set of ''calibration frames''. They should be taken close (in time) to your sky observations, ideally during the same night. The minimum required is:

 i. A set of '''bias frames'''. These are exposures with zero integration time (the shutter remains closed). In principle the bias is constant, but statistical fluctuations and interferences introduce some noise. It is best to combine several (e.g. 10) bias frames. The root-mean-square noise will decrease as the square root of the number of bias frames.
 i. A set of '''dark frames'''. These are long exposures taken with the shutter closed. You can either match the exposure time of the dark frames to the exposure time of your sky observations, or use a ''scalable'' dark frame, from which the bias has been subtracted. The second option is more flexible. Take a set of dark frames with increasing exposure times (from short to long). Here combining the dark frames will mostly help to remove CR hits (high-energy particles do not "see" the shutter...).
 i. A set of '''flat fields'''. These are images of a uniform source of light. Usually the twilight sky is the best choice. The exposure time should be set so that the pixel values reach a good fraction (20%-50%) of the full well capacity. For the GOWI camera you should aim for ~20 000 counts. Such good statistics are needed to reveal the desired level of detail. An automatic sequence to produce twilight sky flat-fields is available. Note that because vignetting and CCD sensitivity are colour-dependent, flat-fields ''must'' be taken with the same filter as that used for the image to calibrate. As before, several exposures are taken to be combined (a sketch of the combination step follows this list).
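As an illustration of how the individual calibration exposures can be combined into master frames, here is a minimal sketch that median-combines them. It again uses Python with numpy and astropy rather than IRAF; the file-name patterns, the reference dark exposure time and the single-filter flat are assumptions made for the example.

{{{#!python
# Sketch: build a master bias, a scalable master dark and a master flat
# (one filter) by median-combining individual frames. File patterns,
# t_ref and the filter choice are placeholders.
import glob
import numpy as np
from astropy.io import fits

def median_combine(filenames):
    """Stack a list of FITS images and take the pixel-by-pixel median."""
    cube = np.stack([fits.getdata(f).astype(np.float64) for f in filenames])
    return np.median(cube, axis=0)

# 1) Master bias: median of all bias frames
master_bias = median_combine(glob.glob("bias_*.fits"))

# 2) Scalable master dark: subtract the bias from each dark, scale each frame
#    to a common reference exposure time, then median-combine
t_ref = 300.0                                   # reference exposure time (s), placeholder
darks = []
for f in glob.glob("dark_*.fits"):
    data, hdr = fits.getdata(f, header=True)
    darks.append((data - master_bias) * (t_ref / hdr["EXPTIME"]))
master_dark = np.median(np.stack(darks), axis=0)

# 3) Master flat for one filter: subtract bias and scaled dark, combine, normalise
flats = []
for f in glob.glob("flat_V_*.fits"):
    data, hdr = fits.getdata(f, header=True)
    flats.append(data - master_bias - master_dark * (hdr["EXPTIME"] / t_ref))
master_flat = np.median(np.stack(flats), axis=0)
master_flat /= master_flat.mean()               # normalise to a mean of 1

for name, img in [("master_bias", master_bias), ("master_dark", master_dark),
                  ("master_flat_V", master_flat)]:
    fits.writeto(f"{name}.fits", img.astype(np.float32), overwrite=True)
}}}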
=== In practice ===

==== Typical calibration sequence ====

 1. Look at what you have: Sit down and sort your data. If you have taken images of several objects, it is best to keep the calibration frames, which will be used for all objects, separate from the object data. Make sure you know which files are bias frames, dark frames, flat-fields (in which filter), etc. Ideally, this should already be clear if you used a proper naming convention for your files.
 2. Prepare the master bias frame: Combine all the bias frames into one image (the master bias). You can take either the average or the median of each pixel value. The first method reduces random noise more effectively, the second is better at excluding abnormal (extreme) values.
 3. Prepare a master (scalable) dark frame: First, subtract the master bias from all your frames. Then, combine all dark frames. The resulting master dark frame represents the dark current obtained during the average of the exposure times of all the dark frames that have been used. If you prefer not to subtract the bias separately and use a scalable dark, you can combine all dark frames with the same exposure time, without subtracting the bias. Check (by ''looking'' at the data) that the master dark frame has no remaining CR. The averaging (in particular with the median) should have removed all of them. If a CR feature remains, find which individual dark frame it comes from, and either remove the CR hits from that image or exclude it from the master dark frame.
 4. Prepare the master flat-fields: First, subtract the master dark frame from all your flat-field frames, either scaled by the ratio of the flat-field exposure time to that of the scalable master dark, or using the master dark having the same exposure time as the flat-field. Then, combine the dark-subtracted flat-fields ''separately for each filter''. At the end one obtains a master flat-field for each filter.
 5. Process the raw data:
   * a) Preparation: Again, have a first look at the raw data. What kind of object do I have? What are the exposure times and the filters used? It is a good idea to have a backup copy of your raw data, and to do the calibration in a separate directory from the one with the (processed) calibration frames. Copy the necessary calibration frames into the working directory of your choice.
   * b) Subtract the master bias: This is done for all images and is neither exposure-time nor filter-dependent.
   * b) (bis) Subtract only the master dark: If you choose not to subtract the bias and dark separately, you can use a non-bias-subtracted master dark ''having the same exposure time'' as the science data. Then skip step c) and go directly to the flat-field correction.
   * c) Subtract the scaled master dark frame: The dark should be scaled to match the exposure time of each raw sky exposure.
   * d) Divide by the corresponding flat-field: Divide the dark-subtracted images by the master flat-field taken with the same filter. The flat-field should be normalised by its mean, so that the mean value of the sky images remains unchanged. You can either produce normalised master flats (in step 4) or scale the flat-field by its mean when doing the division.
   * e) Align and stack the calibrated images: It is common practice to split the total desired exposure time into several exposures. This allows a good rejection of CR, avoids sky-tracking uncertainties, which leave star trails if the telescope does not follow the sky rotation accurately enough, and can be used to avoid saturating the brightest stars in the field. All calibrated images of one object in one filter are then stacked. If the pointing changes (even slightly) from one exposure to another, it is necessary to align them first on a common grid, to avoid a "blur" in the combined image. This can easily be done by matching the positions of several bright objects in the field (a short sketch of this step follows the sequence).
   * f) Look and check: Be sure to check the final images and compare them to the raw data: Are there any remaining CR hits? Are stars more blurry in the final image than in the raw data? Are there remaining large-scale gradients (incorrect flat-fielding)? Redo the necessary steps accordingly.

The calibration is now done! You can go on with the '''analysis'''.
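As an illustration of step e), here is a minimal sketch that aligns a few calibrated frames using the integer pixel offsets of one bright star and then median-stacks them. It is written in Python with numpy and astropy instead of IRAF; the file names, the rough star position and the simple brightest-pixel centring are simplifying assumptions.

{{{#!python
# Sketch: align calibrated frames on a common grid using the position of one
# bright star, then median-stack them. Python/numpy/astropy, not IRAF;
# file names and the rough star position (512, 512) are placeholders.
import glob
import numpy as np
from astropy.io import fits

def star_position(image, y0, x0, box=15):
    """Return the (y, x) of the brightest pixel in a small box around (y0, x0)."""
    cut = image[y0 - box:y0 + box, x0 - box:x0 + box]
    dy, dx = np.unravel_index(np.argmax(cut), cut.shape)
    return y0 - box + dy, x0 - box + dx

files = sorted(glob.glob("object_cal_*.fits"))
images = [fits.getdata(f).astype(np.float64) for f in files]

# Position of a bright, isolated star in the first (reference) image
y_ref, x_ref = star_position(images[0], 512, 512)

aligned = []
for img in images:
    y, x = star_position(img, 512, 512)
    # Shift each image by its integer offset relative to the reference frame
    aligned.append(np.roll(img, (y_ref - y, x_ref - x), axis=(0, 1)))

# The median combination rejects cosmic rays and other outliers
stacked = np.median(np.stack(aligned), axis=0)
fits.writeto("object_stacked.fits", stacked.astype(np.float32), overwrite=True)
}}}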
==== A working example with IRAF ====

[[/IRAFexample| On this page]], we show an example of image calibration using IRAF.

(/!\ To the students selected for the closed beta: the new workflow for full reduction and analysis is available [[https://wiki.mpe.mpg.de/cog/Workbench|here]].)

--------

== Photometry ==

Photometry is the technique of measuring the brightness of astronomical objects. To do so, we quantify the signal from a given source collected on our camera during the exposure time. In other words, we only measure the '''instrumental flux''' of a source. To measure the intrinsic luminosity of an object, we need a flux calibration (how much signal is recorded for a given number of photons from the source) and a knowledge of the distance and extinction towards the source (see e.g. [[ForPraSternenhaufen| description of the star cluster experiment]]).

Various methods are available to perform photometry.

 * The simplest is '''aperture photometry''': the signal is integrated over a circular aperture defined around the star of interest. A second region, usually a concentric annulus around the circular aperture, is used to estimate the sky background. The background signal is scaled to account for the relative sizes of the star and background extraction regions, before being subtracted from the source signal. A number of problems limit the use of aperture photometry for some purposes:
  * Choice of aperture: An aperture that is too small will miss the fraction of a star's flux that lies in the wings of the PSF. An aperture that is too large will include noise from the sky background.
  * Crowding: It becomes increasingly difficult to define source and background regions in ''crowded'' fields, where stars are close to one another. In some cases (poor seeing, globular clusters...), aperture photometry might even be impossible, because stars are blended and cannot be separated anymore.
 * The way to go beyond the limits of aperture photometry is to perform so-called '''PSF-fitting photometry''' (or PSF photometry for short). There, a model of the point-spread function (PSF) is determined for an image using some representative stars. This model is then fitted to all the stars detected in the image. It is then possible to know the contribution from an object and from the sky in a given region. Even if several stars are present in that region, their signal can be separated by fitting the PSF to each of them. There is also no concern about the size of an aperture, because all the signal under the PSF can be integrated. This method is of course more demanding than simple aperture photometry. In particular, it requires care when preparing the PSF model.

For the [[ForPraSternenhaufen|star cluster experiment]] the use of PSF photometry is strongly advised. This will increase the number of stars you can measure in globular clusters. The software used for PSF photometry is called '''DAOPHOT'''. It was developed by [[http://cdsads.u-strasbg.fr/abs/1987PASP...99..191S|Peter Stetson (1987)]] and is a package of the IRAF environment.

=== In practice ===

Below we describe a typical analysis sequence with DAOPHOT. The goal is to use the calibrated images to perform PSF photometry and obtain a list of magnitudes for all stars in the images. (A scripted sketch of the first two steps follows the list.)

 1. Make an initial star list with a source detection algorithm. This typically searches for local density enhancements with a peak amplitude greater than a given threshold above the local background.
 2. Perform aperture photometry on the detected objects. This gives a rough estimate of the magnitude of the stars and helps in choosing "PSF stars" (see below).
 3. Select "PSF stars", i.e. stars that will be used to build the PSF model. '''Having a good, realistic PSF model is the critical step to guarantee the success of the photometry'''. Therefore, selecting isolated stars with enough statistics is essential. There should be enough stars (between 5 and 30, for instance), distributed across the field of view to account for possible spatial variations of the PSF.
 4. Compute the PSF model using the PSF stars. Various functional forms exist for the PSF model, with different numbers of parameters. In addition, the model can be constant across the image or vary with position.
 5. Fit and subtract the current PSF model from the PSF stars. If the model is appropriate, the PSF stars should be cleanly subtracted from the image. If not, either adapt the list of PSF stars (e.g. add more stars, remove those that do not subtract out cleanly, etc.) or change the model used (e.g. a more complex model, or one varying with position).
 6. If the current PSF model is satisfactory, fit the model to all detected stars. Thus, stars with nearby neighbours can be separated and have accurate photometry.
 7. (Optional) If needed, a second source detection can be run on the fitted-star-subtracted image to search for faint stars not detected in the first run.
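This is not the DAOPHOT/IRAF workflow itself (that is shown on the linked example page). As a scripted illustration of steps 1 and 2 only, here is a sketch using Python with the photutils package; the input file name, the detection parameters (FWHM, threshold) and the aperture radii are arbitrary example values.

{{{#!python
# Sketch of steps 1-2: source detection and aperture photometry on a calibrated,
# stacked image. Python + photutils (not IRAF/DAOPHOT); the file name and all
# detection/aperture parameters are example assumptions.
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                aperture_photometry)

data = fits.getdata("object_stacked.fits")

# 1) Detect stars: peaks more than 5 sigma above the local background
mean, median, std = sigma_clipped_stats(data, sigma=3.0)
finder = DAOStarFinder(fwhm=4.0, threshold=5.0 * std)
sources = finder(data - median)

# 2) Aperture photometry with a concentric sky annulus
positions = np.transpose((sources["xcentroid"], sources["ycentroid"]))
aper = CircularAperture(positions, r=5.0)
annulus = CircularAnnulus(positions, r_in=8.0, r_out=12.0)
phot = aperture_photometry(data, [aper, annulus])

# Scale the sky signal to the aperture area before subtracting it
sky_per_pixel = phot["aperture_sum_1"] / annulus.area
net_counts = phot["aperture_sum_0"] - sky_per_pixel * aper.area
phot["inst_mag"] = -2.5 * np.log10(net_counts)   # rough instrumental magnitude

phot.write("aperture_photometry.csv", format="ascii.csv", overwrite=True)
}}}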
==== A working example with IRAF ====

[[/IRAFexamplePhot| On this page]], we show an example of PSF photometry using DAOPHOT in IRAF.

--------

== Final analysis ==

Once the photometry is done, the final step is to create scientific products (plots, light curves, spectra...) and use them to draw conclusions from the results. In the framework of the [[ForPraSternenhaufen|star cluster experiment]], the scientific products to make are colour-magnitude diagrams.

=== In practice ===

Here we describe the typical sequence to follow to produce a reliable colour-magnitude diagram using the TOPCAT software. (A scripted alternative to the GUI workflow is sketched at the end of this page.)

 1. First, you need to create a tabular list containing all your sources in every filter, with information such as position, magnitude and error.
 2. Feed the software with these tables and perform a cross-matching of the sources between the different filters (different matching algorithms are available). This produces a concatenated table with all the successfully matched source pairs and all the available information.
 3. The graphical interface of TOPCAT allows fast-and-easy plotting of the tables. A colour-magnitude diagram is a plot of the star magnitude in a given filter against a given colour index. Provide one of the magnitude columns for the Y axis and the difference between two magnitude columns for the X axis.
 4. The software also allows easy data handling. You can define subsets by hand (literally, by drawing them on the plot with the mouse) or using algebraic expressions. It is a good idea to reject every source with too large a chi^2 (indicating a poor PSF fit) and/or too large a magnitude error. This is optional but ''highly recommended''.
 5. You can directly apply any photometric correction you want to the tables, without overwriting them, and play around with them in TOPCAT. For example, you can apply corrections for galactic reddening and extinction.

==== A working example with TOPCAT ====

[[/TOPCATexample| On this page]], we show an example of the production of a colour-magnitude diagram using TOPCAT.
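For reference, the cross-matching and colour-magnitude plotting described above can also be scripted instead of being done in the TOPCAT GUI. The following is a minimal sketch in Python using astropy and matplotlib; the input file names, the column names and the 1-arcsecond match radius are assumptions made for the example.

{{{#!python
# Sketch: cross-match two per-filter source tables by sky position and plot a
# colour-magnitude diagram. A scripted alternative to the TOPCAT GUI workflow;
# file names, column names and the match radius are example assumptions.
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.table import Table
from astropy.coordinates import SkyCoord

b_tab = Table.read("stars_B.csv", format="ascii.csv")   # assumed columns: ra, dec, mag, mag_err
v_tab = Table.read("stars_V.csv", format="ascii.csv")

b_coords = SkyCoord(ra=b_tab["ra"] * u.deg, dec=b_tab["dec"] * u.deg)
v_coords = SkyCoord(ra=v_tab["ra"] * u.deg, dec=v_tab["dec"] * u.deg)

# For each B source, find the nearest V source and keep pairs closer than 1"
idx, sep, _ = b_coords.match_to_catalog_sky(v_coords)
good = sep < 1.0 * u.arcsec
b_mag = b_tab["mag"][good]
v_mag = v_tab["mag"][idx[good]]

# Colour-magnitude diagram: V against B-V, with bright stars at the top
plt.scatter(b_mag - v_mag, v_mag, s=2, color="k")
plt.gca().invert_yaxis()
plt.xlabel("B - V")
plt.ylabel("V")
plt.savefig("cmd.png", dpi=150)
}}}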