Integral field spectrographs (IFS) combine spectrographic and imaging capabilities in the optical or infrared wavelength domains (0.32 μm – 24 μm) to obtain, from a single exposure, spatially resolved spectra over a two-dimensional field of view. The name originates from the fact that the measurements result from integrating the light over multiple sub-regions of the field. Developed at first for the study of astronomical objects, this technique is now also used in many other fields, such as bio-medical science and Earth remote sensing. Integral field spectrography is part of the broader category of snapshot hyperspectral imaging techniques, itself a part of hyperspectral imaging.
With the notable exception of individual stars, most astronomical objects are spatially resolved by large telescopes. For spectroscopic studies, the optimum would then be to obtain a spectrum for each spatial pixel in the instrument's field of view, giving full information on each target. The result is loosely called a datacube, from its two spatial and one spectral dimensions. Since both the visible charge-coupled devices (CCDs) and the infrared detector arrays (staring arrays) used in astronomical instruments are only two-dimensional, it is a non-trivial feat to develop spectrographic systems able to deliver 3D datacubes from the output of 2D detectors. Such instruments are usually christened 3D spectrographs in the astronomical field and hyperspectral imagers in non-astronomical ones.
Hyperspectral imagers can be broadly classified into two groups, scanning and non-scanning. The first contains the instruments that build the datacube by combining multiple exposures, scanning along a spatial axis, a wavelength axis, or diagonally through the cube. Examples include push broom scanning systems, scanning Fabry-Pérot interferometers and Fourier transform spectrometers. The second group includes the techniques that acquire the whole datacube in a single shot: snapshot imaging spectrometers. Integral field spectrography (IFS) techniques were the first snapshot hyperspectral imaging techniques to be developed. Since then, other snapshot hyperspectral imaging techniques, based for example on tomographic reconstruction[1] or compressed sensing using a coded aperture,[2] have been developed.[3]
One major advantage of the snapshot approach for ground-based telescopic observations is that it automatically provides homogeneous data sets despite the unavoidable variability of Earth’s atmospheric transmission, spectral emission and image blurring during exposures. This is not the case for scanned systems, whose datacubes are built from a set of successive exposures. IFS, whether ground- or space-based, also have the major advantage of detecting much fainter objects in a given exposure time than scanning systems, albeit at the cost of a much smaller sky field area.
After a slow start from the late 1980s onward, integral field spectroscopy has become a mainstream astrophysical tool in the optical to mid-infrared regions, addressing a whole gamut of astronomical sources, essentially any small individual object from Solar System asteroids to vastly distant galaxies.
Integral field spectrographs use so-called Integral Field Units (IFUs) to reformat the small square field of view into a more suitable shape, which is then spectrally dispersed by a grating spectrograph and recorded by a detector array. There are currently three different IFU flavors, using respectively a lenslet array, a fiber array or a mirror array.[3]
An enlarged sky image feeds a mini-lens array, typically a few thousand identical lenses, each about 1 mm in diameter. The lenslet array output is a regular grid of as many small images of the telescope mirror, which serves as the input of a multi-slit spectrograph[4] that delivers the datacubes. This approach was advocated[5] in the early 1980s, and the first ever IFS observations[6][7] were obtained in 1987 with the lenslet-based optical instrument TIGER.
Pros are 100% on-sky spatial filling when using square or hexagonal lenslets, high throughput, accurate photometry and an easy-to-build IFU. A significant con is the suboptimal use of precious detector pixels (at least ~50% loss) needed to avoid contamination between adjacent spectra.
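The origin of this pixel cost can be illustrated with simple bookkeeping: each lenslet spectrum must be surrounded by a guard band of unused pixels. The sketch below uses entirely hypothetical numbers (spaxel count, spectrum size and guard band), chosen only to show why roughly half of the detector area ends up unused.

```python
# Back-of-the-envelope estimate of detector usage in a lenslet-based IFS.
# All numbers are hypothetical, chosen only to illustrate the scaling.

n_spaxels = 90 * 90        # number of lenslets (spatial samples) in the field
spectrum_length = 250      # pixels along the dispersion of each short spectrum
spectrum_width = 2         # pixels across each spectrum
guard = 2                  # empty pixels kept around each spectrum to avoid
                           # contamination between adjacent spectra

pixels_carrying_signal = n_spaxels * spectrum_length * spectrum_width
pixels_reserved = n_spaxels * (spectrum_length + guard) * (spectrum_width + guard)

print(f"detector filling factor ~ {pixels_carrying_signal / pixels_reserved:.0%}")  # ~50%
```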
Instruments like the Spectrographic Areal Unit for Research on Optical Nebulae (SAURON)[8] on the William Herschel Telescope and the Spectro-Polarimetric High-Contrast Exoplanet Research (SPHERE) IFS[9] subsystem on European Southern Observatory (ESO)'s Very Large Telescope (VLT) use this technique.
The sky image given by the telescope falls on a fiber-based image slicer, typically made of a few thousand fibers, each about 0.1 mm in diameter, with the square or circular input field reformatted into a narrow rectangular (long-slit-like) output. The image slicer output is then coupled to a classical long-slit spectrograph that delivers the datacubes. The first fiber-based IFS observation was successfully obtained with an on-sky demonstrator in 1990, followed some 5 years later by the full-fledged SILFID[10] optical instrument. Coupling the circular fibers to a square or hexagonal lenslet array led to better light injection into the fibers and a nearly 100% filling factor of the sky light.
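The reformatting step itself is conceptually simple bookkeeping: each fiber carries one spatial sample from the square field to a position along the pseudo-slit, and the data-reduction software reverses that mapping to assemble the datacube. The minimal sketch below, with made-up dimensions and stand-in detector data, illustrates that remapping; it is not the reduction scheme of any particular instrument.

```python
import numpy as np

# Toy model of a fiber image slicer: a square field sampled by n x n fibers is
# re-ordered into one long pseudo-slit, dispersed, then re-assembled into a
# datacube. Dimensions and data are illustrative only.
n = 16                 # fibers per side of the square input field
n_lambda = 200         # spectral samples delivered by the spectrograph

# (x, y) position of each fiber in the input field, flattened into slit order
slit_order = [(ix, iy) for ix in range(n) for iy in range(n)]

# The long-slit spectrograph returns one spectrum per fiber: shape (n*n, n_lambda)
rng = np.random.default_rng(0)
slit_spectra = rng.random((n * n, n_lambda))     # stand-in for detector data

# Re-assemble the datacube (two spatial axes plus one spectral axis)
cube = np.zeros((n, n, n_lambda))
for fiber_index, (ix, iy) in enumerate(slit_order):
    cube[ix, iy, :] = slit_spectra[fiber_index]

print(cube.shape)      # (16, 16, 200): the reconstructed datacube
```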
Pros are 100% on-sky spatial filling, an efficient use of detector pixels and commercially available fiber-based image slicers. Cons are the sizable light loss in the fibers (~ 25%), their relatively poor photometric accuracy and their inability to work in a cryogenic environment. The latter limits wavelength coverage to less than 1.6 μm.
This technique is used by instruments in many telescopes (such as INTEGRAL[11] at the William Herschel Telescope), and particularly in currently ongoing large surveys of galaxies, such as the Calar Alto Legacy Integral Field Area Survey (CALIFA)[12] at the Calar Alto Observatory, the Sydney-AAO Multi-object Integral-field spectrograph (SAMI)[13] at the Australian Astronomical Observatory, and the Mapping Nearby Galaxies at APO (MaNGA)[14] which is one of the surveys making up the next phase of the Sloan Digital Sky Survey.
The sky image given by the telescope falls on a mirror-based slicer, typically made of approximately 30 rectangular mirrors, 0.1 to 0.2 mm wide, with the square input field reformatted into a narrow rectangular (long-slit-like) output. The slicer is then coupled to a classical long-slit spectrograph that delivers the datacubes. The first mirror-based slicer near-infrared IFS, the Spectrometer for Infrared Faint Field Imaging[15] (SPIFFI),[16] obtained its first science result[17] in 2003. The key mirror slicer system was soon substantially improved under the name Advanced Imaging Slicer.[18]
Pros are high throughput, 100% on-sky spatial filling, optimal use of detector pixels and the capability to work at cryogenic temperatures. On the other hand, the slicer is difficult and expensive to manufacture and to align, especially when working in the optical domain, given the more stringent optical surface specifications.
IFS are currently deployed in one flavor or another on many large ground-based telescopes, in the visible[19][20] or near-infrared[21][22] domains, and on some space telescopes as well, in particular on the James Webb Space Telescope (JWST) in the near- and mid-infrared domains.[23] As the spatial resolution of telescopes in space (and of ground-based telescopes through adaptive-optics corrections of atmospheric turbulence) has much improved in recent decades, the need for IFS facilities has become more and more pressing. Spectral resolution (the ratio of wavelength to the smallest resolvable wavelength difference, λ/Δλ) is usually a few thousand and wavelength coverage about one octave (i.e. a factor of 2 in wavelength). Note that each IFS requires a finely tuned software package to transform the raw counts into physical units (light intensity versus wavelength at precise sky locations).
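As a highly simplified picture of what such a package does, the final step converts raw detector counts into calibrated flux for each spaxel by dividing by the exposure time and the measured system response; real pipelines also handle bias and dark subtraction, flat-fielding, wavelength calibration, sky subtraction and more. The sketch below is a toy illustration under those assumptions, with an invented response curve and numbers, not the pipeline of any particular instrument.

```python
import numpy as np

# Minimal sketch of the final step of an IFS data-reduction pipeline:
# converting raw counts into physical units for one spaxel. All values and the
# response curve are invented for illustration.

wavelengths = np.linspace(0.95, 1.90, 2048)        # microns, roughly one octave
exposure_time = 600.0                               # seconds
gain = 2.0                                          # electrons per ADU

# End-to-end system response (telescope + IFU + spectrograph + detector),
# normally measured from a spectrophotometric standard star.
response = 0.3 * np.exp(-((wavelengths - 1.4) / 0.6) ** 2)   # dimensionless toy curve

raw_counts = np.random.default_rng(1).poisson(5000, size=wavelengths.size)  # ADU

# Calibrated spectrum: detected photo-electrons per second, corrected for response
calibrated = raw_counts * gain / (exposure_time * response)
```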
With each spatial pixel dispersed over, say, 4096 spectral pixels on a state-of-the-art 4096 × 4096 pixel detector, IFS fields of view are severely limited, ~10 arcseconds across when fed by an 8–10 m class telescope. That in turn mainly limits IFS-based astrophysical science to single small targets. A much larger field of view, 1 arcminute across, i.e. a sky area 36 times larger, is needed to cover hundreds of highly distant galaxies in a single, if very long (up to 100 hours), exposure. This in turn requires the development of IFS systems featuring at least about half a billion detector pixels.
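The scaling behind these numbers is simple arithmetic, sketched below using the figures quoted above (a ~10 arcsecond field served by one 4096 × 4096 detector, scaled up to a 1 arcminute field).

```python
# Detector pixel budget for a wide-field IFS, using the figures quoted above.
detector_pixels = 4096 * 4096        # one 4k x 4k detector, ~16.8 million pixels

small_field = 10.0                   # arcseconds across, single-detector IFS
large_field = 60.0                   # arcseconds across, i.e. 1 arcminute

area_ratio = (large_field / small_field) ** 2         # 36 times more sky area
pixels_needed = detector_pixels * area_ratio

print(f"sky area ratio: {area_ratio:.0f}")                     # 36
print(f"pixels needed: {pixels_needed / 1e9:.1f} billion")     # ~0.6 billion
```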
The brute-force approach would have been to build huge spectrographs feeding gigantic detector arrays. Instead, the two panoramic IFS in operation by 2022, the Multi Unit Spectroscopic Explorer (MUSE) and the Visible Integral-field Replicable Unit Spectrograph (VIRUS),[24] are made of respectively 24 and 120 serially produced optical IFS. This results in substantially smaller and cheaper instruments. The mirror-slicer-based MUSE instrument started operation at the VLT in 2014 and the fiber-slicer-based VIRUS on the Hobby–Eberly Telescope in 2021.
It is conceptually straightforward to combine the capabilities of integral field spectroscopy and multi-object spectroscopy in a single instrument. This is done by deploying a number of small IFUs in a large sky patrol field, possibly a degree or more across. In that way, quite detailed information on, for example, a number of selected galaxies can be obtained in one go. There is of course a tradeoff between the spatial coverage on each target and the total number of accessible targets. The Fibre Large Array Multi Element Spectrograph (FLAMES),[25] the first instrument featuring this capability, had first light in this mode at the VLT in 2002. A number of such facilities are now in operation in the visible[26][27][28] and the near infrared.[29][30] Even larger latitude in the choice of coverage of the patrol field has been proposed under the name of Diverse Field Spectroscopy[31] (DFS), which would allow the observer to select arbitrary combinations of sky regions to maximize observing efficiency and scientific return. This requires technological developments, in particular versatile robotic target pickups[32] and photonic switchyards.[33]
Other techniques can achieve the same ends at different wavelengths. In particular, at radio wavelengths, simultaneous spectral information is obtained with heterodyne receivers,[34] featuring large frequency coverage and huge spectral resolution.
In the X-ray domain, owing to the high energy of individual photons, aptly named 3D photon-counting detectors measure on the fly not only the 2D position of incoming photons but also their energy, and hence their wavelength. Note nevertheless that the spectral information is quite coarse, with spectral resolutions of only ~10. One example is the Advanced CCD Imaging Spectrometer (ACIS) on NASA’s Chandra X-ray Observatory.
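Conceptually, such a detector delivers a list of photon events rather than an image, and the datacube is obtained simply by histogramming those events in position and energy. The sketch below uses randomly generated event data and a coarse energy axis, consistent with a spectral resolution of order 10; it is an illustration of the principle, not of any specific mission's processing.

```python
import numpy as np

# Toy 3D photon-counting reduction: each detected X-ray photon is recorded as
# (x, y, energy), and the datacube is a 3D histogram of these events.
# The event list below is randomly generated for illustration only.
rng = np.random.default_rng(2)
n_events = 100_000
x = rng.uniform(0, 1024, n_events)          # detector x position (pixels)
y = rng.uniform(0, 1024, n_events)          # detector y position (pixels)
energy = rng.uniform(0.5, 8.0, n_events)    # photon energy (keV)

# Coarse spectral axis: ~10 energy bins, reflecting a spectral resolution ~10
cube, _ = np.histogramdd(
    np.column_stack([x, y, energy]),
    bins=(64, 64, 10),
    range=((0, 1024), (0, 1024), (0.5, 8.0)),
)
print(cube.shape)    # (64, 64, 10): two spatial axes and one coarse energy axis
```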
In the visible and near-infrared, this approach is much harder given the much less energetic photons. Nevertheless, small-format superconducting detectors, with limited spectral resolution (~30) and cooled below 0.1 K, have been developed and successfully used, such as the 32×32 pixel Array Camera for Optical to Near-infrared Spectrophotometry[35] (ARCONS) camera at the Hale 200-inch Telescope. In contrast, ‘classical’ IFS usually feature spectral resolutions of a few thousand.