While the presence of any mass bends the path of light passing near it, this effect rarely produces the giant arcs and multiple images associated with strong gravitational lensing. Most lines of sight in the universe are thoroughly in the weak lensing regime, in which the deflection is impossible to detect in a single background source. However, even in these cases, the presence of the foreground mass can be detected, by way of a systematic alignment of background sources around the lensing mass. Weak gravitational lensing is thus an intrinsically statistical measurement, but it provides a way to measure the masses of astronomical objects without requiring assumptions about their composition or dynamical state.
Gravitational lensing acts as a coordinate transformation that distorts the images of background objects (usually galaxies) near a foreground mass. The transformation can be split into two terms, the convergence and shear. The convergence term magnifies the background objects by increasing their size, and the shear term stretches them tangentially around the foreground mass.
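In the standard lensing formalism this mapping is summarized locally by a Jacobian written in terms of the convergence \kappa and the two shear components \gamma_1 and \gamma_2 (a standard result, quoted here for orientation):

\mathcal{A} = \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix}

The \kappa term rescales the image isotropically, while the \gamma terms stretch it along preferred directions.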
To measure this tangential alignment, it is necessary to measure the ellipticities of the background galaxies and construct a statistical estimate of their systematic alignment. The fundamental problem is that galaxies are not intrinsically circular, so their measured ellipticity is a combination of their intrinsic ellipticity and the gravitational lensing shear. Typically, the intrinsic ellipticity is much greater than the shear (by a factor of 3-300, depending on the foreground mass). The measurements of many background galaxies must be combined to average down this "shape noise". The orientation of intrinsic ellipticities of galaxies should be almost[1] entirely random, so any systematic alignment between multiple galaxies can generally be assumed to be caused by lensing.
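The averaging-down of shape noise can be illustrated with a minimal sketch (Python; all numbers are illustrative, the observed ellipticity is modeled simply as intrinsic ellipticity plus shear, and the factor converting mean ellipticity to shear in a real analysis depends on the adopted ellipticity definition):

    import numpy as np

    rng = np.random.default_rng(42)

    n_gal = 10000        # number of background galaxies (illustrative)
    sigma_e = 0.3        # per-galaxy intrinsic ellipticity dispersion (typical value)
    gamma_true = 0.01    # constant tangential shear applied to every galaxy (illustrative)

    # Observed tangential ellipticity ~ intrinsic ellipticity + shear (weak-lensing limit)
    e_obs = rng.normal(loc=gamma_true, scale=sigma_e, size=n_gal)

    # The shear estimate is the mean ellipticity; its uncertainty falls as 1/sqrt(N)
    gamma_hat = e_obs.mean()
    gamma_err = e_obs.std(ddof=1) / np.sqrt(n_gal)
    print(f"estimated shear: {gamma_hat:.4f} +/- {gamma_err:.4f} (true {gamma_true})")

With these numbers the statistical error on the shear is roughly 0.003, showing why thousands of background galaxies are needed to detect a percent-level signal.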
Another major challenge for weak lensing is correction for the point spread function (PSF) due to instrumental and atmospheric effects, which causes the observed images to be smeared relative to the "true sky". This smearing tends to make small objects more round, destroying some of the information about their true ellipticity. As a further complication, the PSF typically adds a small level of ellipticity to objects in the image, which is not at all random and can in fact mimic a true lensing signal. Even for the most modern telescopes, this effect is usually at least of the same order of magnitude as the gravitational lensing shear, and is often much larger. Correcting for the PSF requires building a model of how it varies across the telescope's field of view. Stars in our own galaxy provide a direct measurement of the PSF, and these can be used to construct such a model, usually by interpolating between the points where stars appear on the image. This model can then be used to reconstruct the "true" ellipticities from the smeared ones. Ground-based and space-based data typically undergo distinct reduction procedures due to the differences in instruments and observing conditions.
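A minimal sketch of the interpolation step, assuming star ellipticities have already been measured at scattered positions on the image (the function names and the choice of a second-order polynomial surface are illustrative; production pipelines use far more sophisticated PSF models and also correct for PSF size, not just ellipticity):

    import numpy as np

    def poly2d_design(x, y):
        """Design matrix for a 2nd-order polynomial surface in image coordinates (x, y)."""
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_psf_ellipticity(star_x, star_y, star_e):
        """Fit a smooth polynomial model to one PSF ellipticity component measured from stars."""
        coeffs, *_ = np.linalg.lstsq(poly2d_design(star_x, star_y), star_e, rcond=None)
        return coeffs

    def psf_ellipticity_at(coeffs, x, y):
        """Evaluate the fitted PSF ellipticity model at arbitrary image positions."""
        return poly2d_design(np.atleast_1d(x), np.atleast_1d(y)) @ coeffs

    # Example usage (a crude first-order correction):
    # coeffs_e1 = fit_psf_ellipticity(star_x, star_y, star_e1)
    # gal_e1_corrected = gal_e1 - psf_ellipticity_at(coeffs_e1, gal_x, gal_y)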
Angular diameter distances to the lenses and background sources are important for converting the lensing observables to physically meaningful quantities. These distances are often estimated using photometric redshifts when spectroscopic redshifts are unavailable. Redshift information is also important in separating the background source population from other galaxies in the foreground, or those associated with the mass responsible for the lensing. With no redshift information, the foreground and background populations can be split by an apparent magnitude or a color cut, but this is much less accurate.
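One standard quantity built from these distances is the critical surface density, which sets the scale for converting measured shear into a physical surface mass density (quoted here for context):

\Sigma_{\rm cr} = \frac{c^2}{4\pi G}\,\frac{D_s}{D_l\,D_{ls}}

where D_l, D_s, and D_{ls} are the angular diameter distances to the lens, to the source, and from lens to source. A measured tangential shear profile then corresponds to an excess surface mass density in units of \Sigma_{\rm cr}.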
Galaxy clusters are the largest gravitationally bound structures in the Universe with approximately 80% of cluster content in the form of dark matter.[2] The gravitational fields of these clusters deflect light-rays traveling near them. As seen from Earth, this effect can cause dramatic distortions of a background source object detectable by eye such as multiple images, arcs, and rings (cluster strong lensing). More generally, the effect causes small, but statistically coherent, distortions of background sources on the order of 10% (cluster weak lensing). Abell 1689, CL0024+17, and the Bullet Cluster are among the most prominent examples of lensing clusters.
The effects of cluster strong lensing were first detected by Roger Lynds of the National Optical Astronomy Observatories and Vahe Petrosian of Stanford University who discovered giant luminous arcs in a survey of galaxy clusters in the late 1970s. Lynds and Petrosian published their findings in 1986 without knowing the origin of the arcs.[3] In 1987, Genevieve Soucail of the Toulouse Observatory and her collaborators presented data of a blue ring-like structure in Abell 370 and proposed a gravitational lensing interpretation.[4] The first cluster weak lensing analysis was conducted in 1990 by J. Anthony Tyson of Bell Laboratories and collaborators. Tyson et al. detected a coherent alignment of the ellipticities of the faint blue galaxies behind both Abell 1689 and CL 1409+524.[5] Lensing has been used as a tool to investigate a tiny fraction of the thousands of known galaxy clusters.
Historically, lensing analyses were conducted on galaxy clusters detected via their baryon content (e.g. from optical or X-ray surveys). The sample of galaxy clusters studied with lensing was thus subject to various selection effects; for example, only the most luminous clusters were investigated. In 2006, David Wittman of the University of California at Davis and collaborators published the first sample of galaxy clusters detected via their lensing signals, completely independent of their baryon content.[6] Clusters discovered through lensing are subject to mass selection effects because the more massive clusters produce lensing signals with higher signal-to-noise ratio.
The projected mass distribution of a cluster can be reconstructed from the measured shear field, but only up to the mass-sheet degeneracy, under which the convergence transforms as \kappa \rightarrow \kappa' = \lambda\kappa + (1 - \lambda) while the observable (reduced) shear pattern is left unchanged.
Given a centroid for the cluster, which can be determined by using a reconstructed mass distribution or optical or X-ray data, a model can be fit to the shear profile as a function of clustrocentric radius. For example, the singular isothermal sphere (SIS) profile and the Navarro-Frenk-White (NFW) profile are two commonly used parametric models. Knowledge of the lensing cluster redshift and the redshift distribution of the background galaxies is also necessary for estimating the mass and size from a model fit; these redshifts can be measured precisely using spectroscopy or estimated using photometry. Individual mass estimates from weak lensing can only be derived for the most massive clusters, and the accuracy of these mass estimates is limited by projections along the line of sight.[10]
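A minimal sketch of such a profile fit, using the singular isothermal sphere, for which the predicted tangential shear is \gamma_t(\theta) = \theta_E / (2\theta) with Einstein radius \theta_E (the binned data arrays are placeholders; an NFW fit would follow the same pattern with a different model function):

    import numpy as np
    from scipy.optimize import curve_fit

    def sis_tangential_shear(theta, theta_e):
        """SIS tangential shear; theta and theta_e in the same angular units."""
        return theta_e / (2.0 * theta)

    # Placeholder binned measurements: mean tangential shear in radial bins around the centroid
    theta_bins = np.array([1.0, 2.0, 4.0, 8.0, 16.0])          # arcmin (illustrative)
    gamma_t    = np.array([0.05, 0.027, 0.012, 0.006, 0.003])  # illustrative values
    gamma_err  = np.array([0.01, 0.007, 0.005, 0.004, 0.003])

    popt, pcov = curve_fit(sis_tangential_shear, theta_bins, gamma_t,
                           p0=[1.0], sigma=gamma_err, absolute_sigma=True)
    print(f"best-fit Einstein radius: {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f} arcmin")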
Cluster mass estimates determined by lensing are valuable because the method requires no assumption about the dynamical state or star formation history of the cluster in question. Lensing mass maps can also potentially reveal "dark clusters," clusters containing overdense concentrations of dark matter but relatively insignificant amounts of baryonic matter. Comparison of the dark matter distribution mapped using lensing with the distribution of the baryons using optical and X-ray data reveals the interplay of the dark matter with the stellar and gas components. A notable example of such a joint analysis is the so-called Bullet Cluster.[11] The Bullet Cluster data provide constraints on models relating light, gas, and dark matter distributions such as Modified Newtonian dynamics (MOND) and Λ-Cold Dark Matter (Λ-CDM).
In principle, since the number density of clusters as a function of mass and redshift is sensitive to the underlying cosmology, cluster counts derived from large weak lensing surveys should be able to constrain cosmological parameters. In practice, however, projections along the line of sight cause many false positives.[12] Weak lensing can also be used to calibrate the mass-observable relation via a stacked weak lensing signal around an ensemble of clusters, although this relation is expected to have an intrinsic scatter.[13] In order for lensing clusters to be a precision probe of cosmology in the future, the projection effects and the scatter in the lensing mass-observable relation need to be thoroughly characterized and modeled.
Galaxy-galaxy lensing is a specific type of weak (and occasionally strong) gravitational lensing, in which the foreground object responsible for distorting the shapes of background galaxies is itself an individual field galaxy (as opposed to a galaxy cluster or the large-scale structure of the cosmos). Of the three typical mass regimes in weak lensing, galaxy-galaxy lensing produces a "mid-range" signal (shear correlations of ~1%) that is weaker than the signal due to cluster lensing, but stronger than the signal due to cosmic shear.
J.A. Tyson and collaborators first postulated the concept of galaxy-galaxy lensing in 1984, though the observational results of their study were inconclusive.[14] It was not until 1996 that evidence of such distortion was tentatively discovered,[15] with the first statistically significant results not published until the year 2000.[16] Since those initial discoveries, the construction of larger, high resolution telescopes and the advent of dedicated wide field galaxy surveys have greatly increased the observed number density of both background source and foreground lens galaxies, allowing for a much more robust statistical sample of galaxies, making the lensing signal much easier to detect. Today, measuring the shear signal due to galaxy-galaxy lensing is a widely used technique in observational astronomy and cosmology, often used in parallel with other measurements in determining physical characteristics of foreground galaxies.
Much like in cluster-scale weak lensing, detection of a galaxy-galaxy shear signal requires one to measure the shapes of background source galaxies and then look for statistical shape correlations (specifically, source galaxy shapes should be aligned tangentially relative to the lens center). In principle, this signal could be measured around any individual foreground lens. In practice, however, due to the relatively low mass of field lenses and the inherent randomness in the intrinsic shapes of background sources (the "shape noise"), the signal is impossible to measure on a galaxy-by-galaxy basis. By combining the signals of many individual lens measurements (a technique known as "stacking"), however, the signal-to-noise ratio improves, allowing one to determine a statistically significant signal averaged over the entire lens set.
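A schematic of the stacking procedure (names and data layout are assumptions: lenses is an iterable of (x, y) positions and sources a dict of numpy arrays with keys "x", "y", "e1", "e2"; real analyses also weight by measurement errors and by the lensing geometry of each lens-source pair):

    import numpy as np

    def tangential_ellipticity(e1, e2, dx, dy):
        """Ellipticity component tangential to the lens, for sources offset by (dx, dy)."""
        phi = np.arctan2(dy, dx)
        return -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))

    def stack_tangential_shear(lenses, sources, r_bins):
        """Average the tangential ellipticity of sources in radial bins around many lenses."""
        sums = np.zeros(len(r_bins) - 1)
        counts = np.zeros(len(r_bins) - 1)
        for lx, ly in lenses:                      # loop over lens positions
            dx = sources["x"] - lx
            dy = sources["y"] - ly
            r = np.hypot(dx, dy)
            e_t = tangential_ellipticity(sources["e1"], sources["e2"], dx, dy)
            idx = np.digitize(r, r_bins) - 1
            for b in range(len(r_bins) - 1):
                sel = idx == b
                sums[b] += e_t[sel].sum()
                counts[b] += sel.sum()
        return sums / np.maximum(counts, 1)        # mean tangential shear per radial bin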
Galaxy-galaxy lensing (like all other types of gravitational lensing) is used to measure several quantities pertaining to the masses of the foreground lens galaxies.
The gravitational lensing by large-scale structure also produces an observable pattern of alignments in background galaxies, which must be separated from the intrinsic alignment (IA) of the galaxies themselves.[22][23] This distortion is only ~0.1%-1%, much more subtle than cluster or galaxy-galaxy lensing. The thin lens approximation usually used in cluster and galaxy lensing does not always work in this regime, because structures can be elongated along the line of sight. Instead, the distortion can be derived by assuming that the deflection angle is always small (see Gravitational Lensing Formalism). As in the thin lens case, the effect can be written as a mapping from the unlensed angular position \vec{\beta} to the lensed position \vec{\theta}. The Jacobian of the transformation can be written as an integral over the gravitational potential \Phi along the line of sight:

\frac{\partial\beta_i}{\partial\theta_j} = \delta_{ij} + \int_0^{r_\infty} dr \, g(r) \, \frac{\partial^2\Phi(\vec{x}(r))}{\partial x_i \, \partial x_j}

where r is the comoving distance, the x_i are the transverse distances, and

g(r) = 2r \int_r^{r_\infty} \left(1 - \frac{r}{r'}\right) W(r') \, dr'

is the lensing kernel, which defines the efficiency of lensing for a distribution of sources W(r).
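As a concrete illustration, the sketch below evaluates this kernel numerically for a simple normalized source distribution W(r) (the Gaussian form and all numbers are placeholders chosen only to make the example self-contained):

    import numpy as np
    from scipy.integrate import quad

    r_max = 3000.0   # comoving distance out to which sources are distributed (Mpc, illustrative)

    def W_unnorm(r):
        """Toy source distribution in comoving distance (Gaussian placeholder)."""
        return np.exp(-0.5 * ((r - 1500.0) / 400.0) ** 2)

    W_norm = quad(W_unnorm, 0.0, r_max)[0]   # normalize so that W integrates to 1

    def g(r):
        """Lensing kernel g(r) = 2 r * int_r^{r_max} (1 - r/r') W(r') dr'."""
        integrand = lambda rp: (1.0 - r / rp) * W_unnorm(rp) / W_norm
        return 2.0 * r * quad(integrand, r, r_max)[0]

    for r in (300.0, 750.0, 1200.0):
        print(f"g({r:.0f} Mpc) = {g(r):.1f}")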
As in the thin-lens approximation, the Jacobian can be decomposed into shear and convergence terms.
Because large-scale cosmological structures do not have a well-defined location, detecting cosmological gravitational lensing typically involves the computation of shear correlation functions, which measure the mean product of the shear at two points as a function of the distance between those points. Because there are two components of shear, three different correlation functions can be defined:
\xi_{++}(\Delta\theta) = \langle \gamma_{+}(\vec{\theta}) \, \gamma_{+}(\vec{\theta}+\vec{\Delta\theta}) \rangle

\xi_{\times\times}(\Delta\theta) = \langle \gamma_{\times}(\vec{\theta}) \, \gamma_{\times}(\vec{\theta}+\vec{\Delta\theta}) \rangle

\xi_{+\times}(\Delta\theta) = \langle \gamma_{+}(\vec{\theta}) \, \gamma_{\times}(\vec{\theta}+\vec{\Delta\theta}) \rangle
where \gamma_{+} is the component of the shear along (or perpendicular to) the separation vector \vec{\Delta\theta}, and \gamma_{\times} is the component at 45° to it. For a parity-symmetric shear field the cross-correlation \xi_{+\times} is expected to vanish, so a nonzero measurement is used as a test for systematic errors. The functions \xi_{++} and \xi_{\times\times} carry the cosmological signal. Because they both depend on a single scalar density field, \xi_{++} and \xi_{\times\times} are not independent; they can be combined into E-mode (curl-free) and B-mode (divergence-free) correlation functions, and since gravitational lensing produces only E-modes to leading order, the B-mode correlation function provides a further check for systematics.
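A brute-force sketch of estimating these correlation functions from a shear catalogue (quadratic in the number of galaxies, so only usable for small samples; real analyses use tree codes or dedicated correlation-function packages):

    import numpy as np

    def shear_correlations(x, y, g1, g2, theta_bins):
        """Brute-force estimate of xi_++ and xi_xx from a (small) shear catalogue."""
        nbin = len(theta_bins) - 1
        xi_pp, xi_xx, counts = np.zeros(nbin), np.zeros(nbin), np.zeros(nbin)
        for i in range(len(x) - 1):
            dx, dy = x[i + 1:] - x[i], y[i + 1:] - y[i]
            phi = np.arctan2(dy, dx)          # direction of the separation vector for each pair
            cos2, sin2 = np.cos(2 * phi), np.sin(2 * phi)
            # Shear components rotated into each pair's separation frame
            # (the overall sign convention cancels in the products below).
            gp_i = g1[i] * cos2 + g2[i] * sin2
            gx_i = -g1[i] * sin2 + g2[i] * cos2
            gp_j = g1[i + 1:] * cos2 + g2[i + 1:] * sin2
            gx_j = -g1[i + 1:] * sin2 + g2[i + 1:] * cos2
            idx = np.digitize(np.hypot(dx, dy), theta_bins) - 1
            for b in range(nbin):
                sel = idx == b
                xi_pp[b] += np.sum(gp_i[sel] * gp_j[sel])
                xi_xx[b] += np.sum(gx_i[sel] * gx_j[sel])
                counts[b] += np.count_nonzero(sel)
        counts = np.maximum(counts, 1)        # avoid division by zero in empty bins
        return xi_pp / counts, xi_xx / counts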
The E-mode correlation function is also known as the aperture mass variance:

\langle M_{\rm ap}^2 \rangle(\theta) = \int_0^{2\theta} \frac{\phi \, d\phi}{\theta^2} \left[\xi_{++}(\phi) + \xi_{\times\times}(\phi)\right] T_{+}\!\left(\frac{\phi}{\theta}\right) = \int_0^{2\theta} \frac{\phi \, d\phi}{\theta^2} \left[\xi_{++}(\phi) - \xi_{\times\times}(\phi)\right] T_{-}\!\left(\frac{\phi}{\theta}\right)
T_{+}(x) = 576 \int_0^{\infty} \frac{dt}{t^3} \, J_0(xt) \, [J_4(t)]^2

T_{-}(x) = 576 \int_0^{\infty} \frac{dt}{t^3} \, J_4(xt) \, [J_4(t)]^2
where J_0 and J_4 are Bessel functions of the first kind.
An exact decomposition thus requires knowledge of the shear correlation functions at zero separation, but an approximate decomposition is fairly insensitive to these values because the filters T_{+} and T_{-} are small near \theta = 0.
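The filters can be evaluated numerically; the sketch below computes T_{+}(x) directly from the integral above (the finite upper limit stands in for infinity, which is adequate here because the integrand falls off quickly):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import j0, jv

    def t_plus(x, t_max=200.0):
        """Aperture-mass filter T_+(x) = 576 * int_0^inf dt/t^3 J_0(x t) [J_4(t)]^2, truncated at t_max."""
        integrand = lambda t: j0(x * t) * jv(4, t) ** 2 / t ** 3
        val, _ = quad(integrand, 0.0, t_max, limit=400)
        return 576.0 * val

    for x in (0.5, 1.0, 1.5):
        print(f"T_+({x}) = {t_plus(x):.4f}")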
The ability of weak lensing to constrain the matter power spectrum makes it a potentially powerful probe of cosmological parameters, especially when combined with other observations such as the cosmic microwave background, supernovae, and galaxy surveys. Detecting the extremely faint cosmic shear signal requires averaging over many background galaxies, so surveys must be both deep and wide, and because these background galaxies are small, the image quality must be very good. Measuring the shear correlations at small scales also requires a high density of background objects (again requiring deep, high quality data), while measurements at large scales push for wider surveys.
While weak lensing of large-scale structure was discussed as early as 1967,[26] due to the challenges mentioned above, it was not detected until more than 30 years later, when large CCD cameras enabled surveys of the necessary size and quality. In 2000, four independent groups[27][28][29][30] published the first detections of cosmic shear, and subsequent observations have started to put constraints on cosmological parameters, particularly the dark matter density \Omega_m and the amplitude of matter fluctuations \sigma_8.
For current and future surveys, one goal is to use the redshifts of the background galaxies (often approximated using photometric redshifts) to divide the survey into multiple redshift bins. The low-redshift bins will only be lensed by structures very near to us, while the high-redshift bins will be lensed by structures over a wide range of redshift. This technique, dubbed "cosmic tomography", makes it possible to map out the 3D distribution of mass. Because the third dimension involves not only distance but cosmic time, tomographic weak lensing is sensitive not only to the matter power spectrum today, but also to its evolution over the history of the universe, and the expansion history of the universe during that time. This is a much more valuable cosmological probe, and many proposed experiments to measure the properties of dark energy and dark matter have focused on weak lensing, such as the Dark Energy Survey, Pan-STARRS, and the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory.
Weak lensing also has an important effect on the Cosmic Microwave Background and diffuse 21 cm line radiation. Even though there are no distinct resolved sources, perturbations on the originating surface are sheared in a similar way to galaxy weak lensing, resulting in changes to the power spectrum and statistics of the observed signal. Since the source planes for the CMB and the high-redshift diffuse 21 cm signal lie at higher redshift than resolved galaxies, the lensing effect probes cosmology at higher redshifts than galaxy lensing.
Minimal coupling of general relativity with scalar fields allows solutions like traversable wormholes stabilized by exotic matter of negative energy density. Moreover, Modified Newtonian Dynamics as well as some bimetric theories of gravity consider invisible negative mass in cosmology as an alternative interpretation to dark matter, which classically has a positive mass.[31] [32] [33] [34] [35]
As the presence of exotic matter would bend spacetime and light differently than positive mass, a Japanese team at Hirosaki University proposed to use "negative" weak gravitational lensing related to such negative mass.[36][37][38]
Instead of running a statistical analysis on the distortion of galaxies under the assumption of positive weak lensing, which usually reveals the locations of positive-mass "dark clusters", these researchers propose to locate "negative mass clumps" using negative weak lensing, i.e. where the deformation of galaxies is interpreted as being due to a diverging lensing effect producing radial distortions (similar to a concave lens, as opposed to the classical azimuthal distortions of convex lenses, similar to the image produced by a fisheye). Such negative mass clumps would be located elsewhere than the assumed dark clusters, as they would reside at the centers of observed cosmic voids located between galaxy filaments within the lacunar, web-like large-scale structure of the universe. Such a test based on negative weak lensing could help to falsify cosmological models that propose exotic matter of negative mass as an alternative interpretation of dark matter.[39]