Large Synoptic Survey Telescope Observing Strategy


A general-public overview of my current research project, in which we use the LSST observing strategy to mitigate systematic errors that arise in measurements of the weak gravitational lensing signal due to observational effects, such as the point-spread function. This work is being done with my PhD supervisor Prof Rachel Mandelbaum at Carnegie Mellon University, within the LSST DESC.

LSST

The Large Synoptic Survey Telescope, or LSST, will be the biggest ground-based cosmology survey of the 2020s (the decadal survey). In short, it will be the highest-resolution camera ever built, mounted on an 8.4m telescope, taking images of around 37 billion galaxies and stars in the southern night sky for 10 years and recording 15 terabytes of data per night. You can read more about the LSST and its science goals here, but in short, the LSST was designed to advance cosmology, including weak lensing, clustering, large-scale structure, and supernova science, as well as smaller-scale solar system science. Most of the scientists working on the cosmology side, including myself, are in the Dark Energy Science Collaboration, or DESC. The LSST will start taking science images in Chile in 2020, and anyone in the US will have access to LSST data (but not to DESC tools).

Weak Gravitational Lensing

Light, just like anything else, is influenced by gravitational potentials, so when light travels past a massive object, it gets deflected from its original path. Imagine you’re looking at the sky and there are two objects (stars, or any light-emitting objects, like galaxies) in a row; the light from the farther object might be deflected as it passes the nearer one, getting redirected to you. This would allow you to see the farther object even though it is directly behind the closer one. This is known as gravitational lensing. In the strong regime, the lensed object (the farther one) looks both significantly magnified and sheared (elliptical), sometimes producing what are known as Einstein rings (pictured below). In the weak regime (i.e. when the objects are not exactly on the same line of sight as the observer), objects get magnified and sheared by only a few percent, which is impossible to detect by eye.

All galaxies we can observe are in fact weakly lensed by the large-scale structure of the universe. We can only detect this lensing statistically, however, by observing a large number of galaxies.

On the one hand, weak lensing is one of the best ways to probe dark energy, due to its sensitivity to cosmological parameters such as the density of matter in the universe; on the other hand, it is one of the probes most prone to systematics and to observational and theoretical biases that are extremely difficult to correct for. This project presents a novel way to correct for them, using the observing strategy of the LSST. One of those systematics, the point-spread function, is explained in the following section.

Einstein ring. Credit: ESA/Hubble & NASA

The Point-Spread Function

When a telescope observes a point source, it doesn’t see a point source. What it sees is the combined effects of diffraction, the turbulent atmosphere, optical aberrations, and sensor inefficiencies, convolved with the point source. These combined effects are known as the point-spread function, or the PSF.

It is worth noting that a diffraction-limited telescope is the best-possible scenario. The optical aberrations are caused by imperfections in the optical system, and produce the same effects that many people have in their eyes, such as defocus, astigmatism, and coma. These effects also make objects look more elliptical, which can be mistaken for weak lensing shear. Finally, the atmosphere causes effects similar to the twinkle of the stars, which changes quickly but fortunately averages down to a somewhat blobby shape over the exposure time of the LSST (currently proposed as 2×15-second exposures).
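To build intuition, here is a minimal numpy/scipy sketch (not the pipeline code used in this project) showing how convolution with a slightly elliptical PSF makes a perfectly round object appear elliptical; the grid size and ellipticity values are illustrative assumptions:

import numpy as np
from scipy.signal import fftconvolve

def elliptical_gaussian(n, sigma, e):
    # image of an elliptical Gaussian on an n x n pixel grid
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    a, b = sigma * (1 + e), sigma * (1 - e)  # stretched/squeezed axes
    return np.exp(-0.5 * ((x / a) ** 2 + (y / b) ** 2))

def e1_from_moments(img):
    # ellipticity estimated from unweighted second moments of the image
    n = img.shape[0]
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    mxx, myy = np.sum(img * x ** 2), np.sum(img * y ** 2)
    return (mxx - myy) / (mxx + myy)

galaxy = elliptical_gaussian(64, sigma=3.0, e=0.0)  # a perfectly round 'galaxy'
psf = elliptical_gaussian(64, sigma=2.0, e=0.05)    # a slightly elliptical PSF
image = fftconvolve(galaxy, psf, mode='same')       # what the telescope records
print(e1_from_moments(image))                       # nonzero: the round galaxy now looks elliptical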

It is important to model all these effects accurately and correct for them in images of galaxies. This correction, however, is not precise enough to take full advantage of the incredible statistical power of the LSST, and leaves a bias in measurements of galaxy shapes and sizes. This project explores mitigating these effects using the observing strategy of the telescope.

LSST Operations Simulations

The LSST project and science collaborations use the LSST Operations Simulator (OpSim) to test and propose new observing strategies. OpSim combines science requirements with the physical constraints of the telescope and its environment to simulate realistic 10-year strategies. These simulations include things such as a list of positions where the telescope will point, along with durations, filter changes, weather conditions, and observational quality; but they do not include any image simulations. The following video shows what these simulations look like.
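OpSim runs are distributed as SQLite databases, so they are easy to explore directly. As a rough sketch (the file name below is hypothetical, and the table and column names follow recent OpSim versions and may differ between runs):

import sqlite3
import pandas as pd

# hypothetical OpSim output file; schema varies between OpSim versions
conn = sqlite3.connect('baseline_opsim.db')
visits = pd.read_sql(
    'SELECT fieldRA, fieldDec, filter, observationStartMJD FROM SummaryAllProps',
    conn)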

One noteworthy thing that the LSST will do (which is special to the LSST, because previous surveys have not done it) is dither, i.e. shift the pointing position by an amount on the order of the field of view. This is possible because the LSST will be able to image each part of the sky around 800 times, so an object may be imaged from 800 different positions within the field of view. This, as we will see in the next section, has a significant effect on mitigating weak lensing systematics. The following figure shows a schematic sketch of dithering:

Some OpSim runs include dithering that was designed to optimize uniformity while running the simulations, and some OpSim runs do not include dithering at all; for the latter, we can choose to dither around the field positions that the telescope points at repeatedly (>800 times in 10 years). Some of those patterns are shown below, as implemented in the LSST Metrics Analysis Framework.
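As an illustration of the simplest of these, here is a minimal sketch of random dithering within the field of view; the field center, visit count, and FOV radius below are assumed values, not ones taken from an OpSim run:

import numpy as np

rng = np.random.default_rng(0)
field_ra, field_dec = 30.0, -30.0  # hypothetical undithered field center, in degrees
fov_radius = 1.75                  # approximate LSST field-of-view radius, in degrees
n_visits = 800                     # assumed number of visits to this field

# draw offsets uniformly over the FOV disc (the sqrt makes the density uniform in area)
r = fov_radius * np.sqrt(rng.uniform(0, 1, n_visits))
phi = rng.uniform(0, 2 * np.pi, n_visits)
dithered_ra = field_ra + r * np.cos(phi) / np.cos(np.radians(field_dec))  # cos(Dec) corrects the RA offset
dithered_dec = field_dec + r * np.sin(phi)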

These are then applied to the field positions. Here’s a plot showing the centers of the FOV at every exposure, with random dithering applied in orange and without dithering in blue. The red-shaded area shows the size of the FOV, which contains an object shown in white, highlighting the large number of different positions from which that object would be observed in the dithered case.

Analysis & Results

To test the effect of PSF model errors on weak lensing, we can model the systematics. First, let’s create a number of objects (stars) in some area that the LSST will observe. The area plotted below was selected by placing a lower limit on the magnitude (i.e. faintness) that the telescope will be able to reach, accumulated across all the exposures (‘co-added’). Of course, this area will change depending on the assumptions that go into an OpSim run, but the one plotted here is fairly standard. The discontinuity is actually the Milky Way, which is very dusty and doesn’t provide good observing conditions, so the telescope largely avoids it.

One minor thing to remind ourselves of is that the sky is projected in right-ascension (RA) and declination (Dec) co-ordinates, which are not simple x-y co-ordinates: a patch of fixed RA-Dec extent near the lower edge of declination covers a smaller area on the sky (this is true for the upper edge too, but the LSST only observes the southern sky). We have to take this into account; otherwise, a uniform distribution in RA and Dec will not be a true uniform distribution over equal-area parts of the sky. We can do this by using the co-ordinate transformation

$$\delta = \arcsin(2u - 1), \qquad u \sim \mathrm{Uniform}(0, 1),$$

which follows from the fact that the solid-angle element is $d\Omega = \cos\delta \, d\alpha \, d\delta$, so the points should be uniform in $\sin\delta$ rather than in $\delta$ itself.
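In code, sampling points uniformly over the sphere then looks like this (a minimal numpy sketch; the sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(42)
n = 100_000
ra = rng.uniform(0, 360, n)                         # uniform in right ascension
dec = np.degrees(np.arcsin(rng.uniform(-1, 1, n)))  # uniform in sin(Dec), i.e. equal-area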

You can notice that this correction was taken into account by the thinning of the plot at the most negative declinations, toward the southern celestial pole.

This effect is the same reason why areas near the poles appear larger than they truly are on maps of the Earth. You can read more about this here.

PSF model errors

Recent papers from the Dark Energy Survey and Hyper Suprime-Cam found that PSF modeling errors are tiny at the center of the field of view, but become significant and take on a radial orientation near the edges of the focal plane. We can model this with some code, but first, let me digress a little to talk about shear, because ‘radial’ in shear space means something different from radial in the familiar space that spans 360 degrees.

Shear

I mentioned earlier that weak gravitational lensing distorts the shapes of galaxies (or any objects) that are lensed by an intermediate object. This distortion is called shear. Shear is a measure of ellipticity and is a spin-2 field. The most intuitive way to think about it is as an arrow without a head: for example, if something is sheared in the horizontal direction, it makes no difference whether it was stretched to the left or to the right; the effect is the same. This means that a rotation of 180 degrees, instead of 360 degrees, makes no difference in shear space.
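A few lines of numpy make this concrete: the shear components are built from twice the position angle, so rotating by 180 degrees leaves them unchanged (the magnitude and angle below are arbitrary):

import numpy as np

e = 0.08  # an arbitrary ellipticity magnitude
for theta in (np.radians(30), np.radians(30 + 180)):
    # the components depend on 2*theta, so theta and theta + 180 deg
    # give identical values (up to floating-point rounding)
    print(e * np.cos(2 * theta), e * np.sin(2 * theta))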

Now back to the implementation of the PSF modeling errors: if we center the field of view at a dither position and find all the objects that fall within it, we can select those in the outermost 20% of the field radius and give them radial orientations and relatively high ellipticity values.

import numpy as np

# stars_positions is an (N, 2) array of star positions; work relative to the FOV center
rel_X = stars_positions[:, 0] - FOV_center[0]
rel_Y = stars_positions[:, 1] - FOV_center[1]
r = np.sqrt(rel_X**2 + rel_Y**2)  # radial distance of each star from the center
r = np.where(r < 0.0237, 0., 0.08)  # 0.0237 is 80% of the LSST FOV radius in radians
theta = np.arctan2(rel_Y, rel_X)  # position angle of each star
stare1 = r * np.cos(2 * theta)  # spin-2 components of a radially-oriented ellipticity
stare2 = r * np.sin(2 * theta)
psfe1 = stare1 / 1.06  # assuming the PSF model underestimates the true star ellipticity by about 6% in each e-direction
psfe2 = stare2 / 1.06

which results in this:

Statistical tests of uniformity

The Kolmogorov-Smirnov test, or KS test, can be used to check whether two samples come from the same distribution, by calculating the maximum difference between the cumulative distribution functions (CDFs) of the two samples. One would expect the PSF modeling errors to average down and cancel out when the distribution of angles at which a single object is seen, relative to the FOV centers of all the exposures, is uniform. Therefore, we can use the KS test to compare this distribution of angles to a uniform distribution.

We can do this by executing:

from scipy import stats
import matplotlib.pyplot as plt

ks_dstatistic = []
# savedStarsAngles is a dictionary with keys representing stars and values
# representing lists of the angles described above
for angles in savedStarsAngles.values():
    # compare to a uniform distribution on [-90, 90) degrees
    ks_dstatistic.append(stats.kstest(angles, 'uniform', args=(-90, 180)).statistic)
plt.hist(ks_dstatistic, bins=100, alpha=0.5, label=str(DitherPattern))

and the results for the KS test D statistic, for all stars, for the first year of simulated LSST operations, look like this: (the peak at 0.5 is for unlucky stars that were not observed at all in the first year)

The plot shows that the random dithering pattern performs best, followed by spiral, then hexagonal. This is because the random pattern shows the smallest D statistics; in other words, it is the closest of the three to a uniform distribution. (Interestingly, the hexagonal pattern starts looking more uniform as time passes, leaving spiral the worst in the 10-year simulation.)

Effect on cosmological measurements

Averaging across exposures

We can store in a dictionary, with keys representing the positions of stars, all the ellipticity values calculated above, for each exposure, and then average them down. A simple average does not work, because averaging is not a linear operation in ellipticity space; however, it is linear in the space of second moments. In this space, we get the second-moment matrix

$$M = \frac{\mathrm{Tr}\,M}{2} \begin{pmatrix} 1 + e_1 & e_2 \\ e_2 & 1 - e_1 \end{pmatrix},$$

where $\mathrm{Tr}\,M$ is the trace of $M$, a characteristic of the size of the star or of the PSF.

Afterwards, we can take a simple arithmetic average of $M$ across exposures for each object, and then return to ellipticity space via

$$e_1 = \frac{M_{xx} - M_{yy}}{\mathrm{Tr}\,M}, \qquad e_2 = \frac{2 M_{xy}}{\mathrm{Tr}\,M},$$

where $\mathrm{Tr}\,M$ is a function of the full width at half maximum of the star or the PSF, another characteristic of size.
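Put together, this averaging can be sketched in a few lines; the per-exposure (e1, e2, Tr M) values below are made-up numbers for a single star:

import numpy as np

def e_to_moments(e1, e2, trM):
    # second-moment matrix from ellipticity components and trace (size)
    return 0.5 * trM * np.array([[1 + e1, e2], [e2, 1 - e1]])

def moments_to_e(M):
    # ellipticity components recovered from a second-moment matrix
    trM = np.trace(M)
    return (M[0, 0] - M[1, 1]) / trM, 2 * M[0, 1] / trM

# hypothetical per-exposure (e1, e2, TrM) values for one star
exposures = [(0.05, 0.00, 1.0), (-0.03, 0.04, 1.1), (0.00, -0.05, 0.9)]
M_mean = np.mean([e_to_moments(*exp) for exp in exposures], axis=0)
print(moments_to_e(M_mean))  # the averaged ellipticity for this star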

Cosmic shear bias

To get a little technical, let’s define cosmic shear: it is the autocorrelation function of the shear for a set of objects like the one we’ve been studying. Cosmic shear provides information on dark energy, since it is affected by the rate of growth of structure in the universe. Since PSF modeling errors induce biases in the shear, one would expect them to induce biases in shear autocorrelations as well: in fact they do, and PSF size errors, as well as shape errors, induce biases in this cosmic shear. A formalism to propagate these biases into errors on the cosmic shear was worked out in this paper and also this paper. Using them, we can estimate errors on the cosmic shear signal for each case:

The y-scale on the plot is symlog with a small linear threshold; in other words, it is a log plot for any value whose absolute magnitude is above the threshold and linear below it, allowing us to see negative values.
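For reference, this kind of axis can be set up in matplotlib like so (the threshold value is illustrative, not the one used in the plot, and the keyword name is for recent matplotlib versions):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# logarithmic beyond +/- linthresh, linear inside it, so negative values remain visible
ax.set_yscale('symlog', linthresh=1e-7)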

This plot shows the error on the cosmic shear signal, so the best case is when the curve is closest to zero. These results are consistent with the KS test results above.
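As a rough illustration of the first step in that formalism, the autocorrelation of the PSF ellipticity residuals can be computed with the treecorr package; everything below (positions, residuals, binning) is made up for the sketch and is not the code used in this project:

import numpy as np
import treecorr

# hypothetical star positions (degrees) and PSF ellipticity residuals (true - model)
rng = np.random.default_rng(1)
ra, dec = rng.uniform(0, 10, 5000), rng.uniform(-40, -30, 5000)
de1 = 0.005 * rng.standard_normal(5000)
de2 = 0.005 * rng.standard_normal(5000)

cat = treecorr.Catalog(ra=ra, dec=dec, g1=de1, g2=de2,
                       ra_units='deg', dec_units='deg')
gg = treecorr.GGCorrelation(min_sep=1., max_sep=100., nbins=20, sep_units='arcmin')
gg.process(cat)  # gg.xip then holds the xi_plus autocorrelation of the residuals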

This same analysis can be repeated on any OpSim run. Newer OpSim runs are available here if you would like to try them out. Also, feel free to check out my Jupyter notebook on GitHub if you would like to see the full, more optimized code.


© Husni Almoubayyed 2018. All rights reserved.