Dynamic mode decomposition explained
Dynamic mode decomposition (DMD) is a dimensionality reduction algorithm developed by Peter J. Schmid and Joern Sesterhenn in 2008.[1] [2] Given a time series of data, DMD computes a set of modes, each of which is associated with a fixed oscillation frequency and decay/growth rate. For linear systems in particular, these modes and frequencies are analogous to the normal modes of the system, but more generally, they are approximations of the modes and eigenvalues of the composition operator (also called the Koopman operator). Due to the intrinsic temporal behaviors associated with each mode, DMD differs from dimensionality reduction methods such as principal component analysis (PCA), which computes orthogonal modes that lack predetermined temporal behaviors. Because its modes are not orthogonal, DMD-based representations can be less parsimonious than those generated by PCA. However, they can also be more physically meaningful because each mode is associated with a damped (or driven) sinusoidal behavior in time.
Overview
Dynamic mode decomposition was first introduced by Schmid as a numerical procedure for extracting dynamical features from flow data.[3]
The data takes the form of a snapshot sequence

$$V_1^N = \{v_1, v_2, \ldots, v_N\},$$

where $v_i \in \mathbb{R}^M$ is the $i$-th snapshot of the flow field, and $V_1^N \in \mathbb{R}^{M \times N}$ is a data matrix whose columns are the individual snapshots. These snapshots are assumed to be related via a linear mapping that defines a linear dynamical system

$$v_{i+1} = A v_i$$

that remains approximately the same over the duration of the sampling period. Written in matrix form, this implies that

$$V_2^N = A V_1^{N-1} + r e_{N-1}^T,$$

where $r$ is the vector of residuals that accounts for behaviors that cannot be described completely by $A$, $e_{N-1} = \{0, 0, \ldots, 1\} \in \mathbb{R}^{N-1}$, $V_1^{N-1} = \{v_1, v_2, \ldots, v_{N-1}\}$, and $V_2^N = \{v_2, v_3, \ldots, v_N\}$. Regardless of the approach, the output of DMD is the eigenvalues and eigenvectors of $A$, which are referred to as the DMD eigenvalues and DMD modes, respectively.
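To make this setup concrete, the following minimal NumPy sketch forms the shifted data matrices $V_1^{N-1}$ and $V_2^N$ from a snapshot sequence; the variable names and the random stand-in data are illustrative, not from the text. When the snapshot dimension is small, a least-squares operator relating the two matrices can even be formed directly, and its eigendecomposition gives the DMD eigenvalues and modes; the companion-matrix and SVD-based formulations below avoid building this operator explicitly.

```python
import numpy as np

# Illustrative stand-in data: N snapshots of dimension M as the columns of X.
M, N = 50, 20
X = np.random.randn(M, N)

V1 = X[:, :-1]   # V_1^{N-1}: snapshots v_1, ..., v_{N-1}
V2 = X[:, 1:]    # V_2^N    : snapshots v_2, ..., v_N

# For small M, a linear map with V2 ~ A @ V1 (in the least-squares sense) can
# be formed directly via the pseudoinverse; its eigenvalues and eigenvectors
# are the DMD eigenvalues and modes.
A = V2 @ np.linalg.pinv(V1)
dmd_eigenvalues, dmd_modes = np.linalg.eig(A)
```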
Algorithm
There are two methods for obtaining these eigenvalues and modes. The first is Arnoldi-like, which is useful for theoretical analysis due to its connection with Krylov methods. The second is a singular value decomposition (SVD) based approach that is more robust to noise in the data and to numerical errors.
The Arnoldi approach
In fluids applications, the size of a snapshot, $M$, is assumed to be much larger than the number of snapshots $N$, so there are many equally valid choices of $A$. The original DMD algorithm picks $A$ so that each of the snapshots in $V_2^N$ can be expressed as a linear combination of the snapshots in $V_1^{N-1}$. Because most of the snapshots appear in both data sets, this representation is error free for all snapshots except $v_N$, which is written as

$$v_N = a_1 v_1 + a_2 v_2 + \cdots + a_{N-1} v_{N-1} + r = V_1^{N-1} a + r,$$

where $a = \{a_1, a_2, \ldots, a_{N-1}\}$ is a set of coefficients DMD must identify and $r$ is the residual. In total,

$$V_2^N = A V_1^{N-1} = V_1^{N-1} S + r e_{N-1}^T,$$

where $S$ is the companion matrix

$$S = \begin{pmatrix}
0 & 0 & \cdots & 0 & a_1 \\
1 & 0 & \cdots & 0 & a_2 \\
0 & 1 & \cdots & 0 & a_3 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & a_{N-1}
\end{pmatrix}.$$

The vector $a$ can be computed by solving a least squares problem, which minimizes the overall residual. In particular, if we take the QR decomposition of $V_1^{N-1} = QR$, then $a = R^{-1} Q^T v_N$.

In this form, DMD is a type of Arnoldi method, and therefore the eigenvalues of $S$ are approximations of the eigenvalues of $A$. Furthermore, if $y$ is an eigenvector of $S$, then $V_1^{N-1} y$ is an approximate eigenvector of $A$. The reason an eigendecomposition is performed on $S$ rather than $A$ is that $S$ is much smaller than $A$, so the computational cost of DMD is determined by the number of snapshots rather than the size of a snapshot.
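A minimal NumPy sketch of this companion-matrix formulation is given below; the variable names and the random stand-in data are illustrative assumptions, not part of the original description.

```python
import numpy as np

# Sketch of the Arnoldi-like (companion-matrix) formulation described above.
rng = np.random.default_rng(0)
M, N = 200, 21
X = rng.standard_normal((M, N))          # stand-in snapshot sequence
V1, v_last = X[:, :-1], X[:, -1]          # V_1^{N-1} and v_N

# Solve the least-squares problem v_N ~ V1 @ a via a QR decomposition.
Q, R = np.linalg.qr(V1)
a = np.linalg.solve(R, Q.T @ v_last)

# Assemble the companion matrix S: ones on the sub-diagonal, last column a.
n = V1.shape[1]
S = np.zeros((n, n))
S[1:, :-1] = np.eye(n - 1)
S[:, -1] = a

# Eigenvalues of S approximate the DMD eigenvalues; V1 @ y gives the
# corresponding approximate DMD modes.
eigvals, eigvecs = np.linalg.eig(S)
modes = V1 @ eigvecs
```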
The SVD-based approach
Instead of computing the companion matrix $S$, the SVD-based approach yields the matrix $\tilde S$ that is related to $A$ via a similarity transform. To do this, assume we have the SVD of $V_1^{N-1} = U \Sigma W^T$. Then

$$V_2^N = A V_1^{N-1} + r e_{N-1}^T = A U \Sigma W^T + r e_{N-1}^T.$$

Equivalent to the assumption made by the Arnoldi-based approach, we choose $A$ such that the snapshots in $V_2^N$ can be written as linear superpositions of the columns of $U$, which is equivalent to requiring that they can be written as superpositions of POD modes. With this restriction, minimizing the residual requires that it be orthogonal to the POD basis (i.e., $U^T r = 0$). Multiplying both sides of the equation above by $U^T$ then yields $U^T V_2^N = U^T A U \Sigma W^T$, which can be rearranged to obtain

$$U^T A U = U^T V_2^N W \Sigma^{-1} \equiv \tilde S.$$

Because $A$ and $\tilde S$ are related via a similarity transform, the eigenvalues of $\tilde S$ are eigenvalues of $A$, and if $y$ is an eigenvector of $\tilde S$, then $U y$ is an eigenvector of $A$.
In summary, the SVD-based approach is as follows:
- Split the time series of data in $V_1^N$ into the two matrices $V_1^{N-1}$ and $V_2^N$.
- Compute the SVD of $V_1^{N-1} = U \Sigma W^T$.
- Form the matrix $\tilde S = U^T V_2^N W \Sigma^{-1}$, and compute its eigenvalues $\lambda_i$ and eigenvectors $y_i$.
- The $i$-th DMD eigenvalue is $\lambda_i$ and the $i$-th DMD mode is $U y_i$.
The advantage of the SVD-based approach over the Arnoldi-like approach is that noise in the data and numerical truncation issues can be compensated for by truncating the SVD of $V_1^{N-1}$. As has been noted, accurately computing more than the first couple of modes and eigenvalues can be difficult on experimental data sets without this truncation step.
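The summary above translates almost line-for-line into code. The sketch below is one possible NumPy implementation; the function name, its arguments, and the optional `rank` truncation are illustrative choices rather than something prescribed by the algorithm.

```python
import numpy as np

def dmd_svd(X, rank=None):
    """SVD-based DMD sketch following the steps summarized above.

    X is an M x N matrix whose columns are sequential snapshots; `rank`
    optionally truncates the SVD to suppress noise (a user choice).
    """
    V1, V2 = X[:, :-1], X[:, 1:]                        # step 1: split the data
    U, s, Wh = np.linalg.svd(V1, full_matrices=False)   # step 2: SVD of V_1^{N-1}
    if rank is not None:
        U, s, Wh = U[:, :rank], s[:rank], Wh[:rank, :]
    # step 3: S_tilde = U^T V_2^N W Sigma^{-1}, then its eigendecomposition
    S_tilde = U.conj().T @ V2 @ Wh.conj().T @ np.diag(1.0 / s)
    eigvals, eigvecs = np.linalg.eig(S_tilde)
    # step 4: the DMD eigenvalues are eigvals; the DMD modes are U @ y_i
    modes = U @ eigvecs
    return eigvals, modes
```

For example, `eigvals, modes = dmd_svd(X, rank=10)` keeps only the ten leading POD directions before forming $\tilde S$, which is the truncation step discussed above.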
Theoretical and algorithmic advancements
Since its inception in 2010, a considerable amount of work has focused on understanding and improving DMD. One of the first analyses of DMD by Rowley et al.[4] established the connection between DMD and the Koopman operator, and helped to explain the output of DMD when applied to nonlinear systems. Since then, a number of modifications have been developed that either strengthen this connection further or enhance the robustness and applicability of the approach.
- Optimized DMD: Optimized DMD is a modification of the original DMD algorithm designed to compensate for two limitations of that approach: (i) the difficulty of DMD mode selection, and (ii) the sensitivity of DMD to noise or other errors in the last snapshot of the time series.[5] Optimized DMD recasts the DMD procedure as an optimization problem where the identified linear operator has a fixed rank. Furthermore, unlike DMD which perfectly reproduces all of the snapshots except for the last, Optimized DMD allows the reconstruction errors to be distributed throughout the data set, which appears to make the approach more robust in practice.
- Optimal Mode Decomposition: Optimal Mode Decomposition (OMD) recasts the DMD procedure as an optimization problem and allows the user to directly impose the rank of the identified system.[6] Provided this rank is chosen properly, OMD can produce linear models with smaller residual errors and more accurate eigenvalues on both synthetic and experimental data sets.
- Exact DMD: The Exact DMD algorithm generalizes the original DMD algorithm in two ways. First, in the original DMD algorithm the data must be a time series of snapshots, but Exact DMD accepts a data set of snapshot pairs.[7] The snapshots in each pair must be separated by the same fixed time interval, but need not be drawn from a single time series. In particular, Exact DMD allows data from multiple experiments to be aggregated into a single data set (a minimal sketch in this spirit appears after this list). Second, the original DMD algorithm effectively pre-processes the data by projecting onto a set of POD modes. The Exact DMD algorithm removes this pre-processing step, and can produce DMD modes that cannot be written as the superposition of POD modes.
- Sparsity Promoting DMD: Sparsity promoting DMD is a post-processing procedure for DMD mode and eigenvalue selection.[8] It uses an $\ell_1$ penalty to identify a smaller set of important DMD modes, and is an alternative approach to the DMD mode selection problem that can be solved efficiently using convex optimization techniques.
- Multi-Resolution DMD: Multi-Resolution DMD (mrDMD) combines the techniques of multiresolution analysis with Exact DMD in order to robustly extract DMD modes and eigenvalues from data sets containing multiple timescales.[9] The mrDMD approach was applied to global surface temperature data, and identifies a DMD mode that appears during El Niño years.
- Extended DMD: Extended DMD is a modification of Exact DMD that strengthens the connection between DMD and the Koopman operator.[10] As the name implies, Extended DMD is an extension of DMD that uses a richer set of observable functions to produce more accurate approximations of the Koopman operator. This extended set can be chosen a priori or learned from data.[11] [12] It was also shown that DMD and related methods produce approximations of the Koopman eigenfunctions in addition to the more commonly used eigenvalues and modes.
- Residual DMD: Residual DMD provides a means to control the projection errors of DMD and Extended DMD that arise from finite-dimensional approximations of the Koopman operator.[13] [14] The method utilizes the same snapshot data but introduces an additional finite matrix that captures infinite-dimensional residuals exactly in the large data limit. This enables users to sidestep spectral pollution (spurious modes), verify Koopman mode decompositions and learned dictionaries, and compute continuous spectra. Moreover, the method further bolsters the link between DMD and the Koopman operator by demonstrating how the spectral content of the latter can be computed with verification and error control.
- Physics-informed DMD: Physics-informed DMD forms a Procrustes problem that restricts the family of admissible models to a matrix manifold that respects the physical structure of the system.[15] This allows physical structures to be incorporated into DMD. This approach is less prone to overfitting, requires less training data, and is often less computationally expensive to build than standard DMD models.
- Measure-preserving EDMD: Measure-preserving extended DMD (mpEDMD) is a Galerkin method whose eigendecomposition converges to the spectral quantities of the Koopman operator for general measure-preserving dynamical systems.[16] The method applies an orthogonal Procrustes problem (essentially a polar decomposition) to DMD and extended DMD. Beyond convergence, mpEDMD upholds physical conservation laws and exhibits enhanced robustness to noise as well as improved long-term behavior.
- DMD with Control: Dynamic mode decomposition with control (DMDc)[17] is a modification of the DMD procedure designed for data obtained from input-output systems. One unique feature of DMDc is the ability to disambiguate the effects of system actuation from the open-loop dynamics, which is useful when data are obtained in the presence of actuation.
- Total Least Squares DMD: Total Least Squares DMD is a recent modification of Exact DMD meant to address issues of robustness to measurement noise in the data. In,[18] the authors interpret the Exact DMD as a regression problem that is solved using ordinary least squares (OLS), which assumes that the regressors are noise free. This assumption creates a bias in the DMD eigenvalues when it is applied to experimental data sets where all of the observations are noisy. Total least squares DMD replaces the OLS problem with a total least squares problem, which eliminates this bias.
- Dynamic Distribution Decomposition: DDD focuses on the forward problem in continuous time, i.e., the transfer operator. However, the method can also be used to fit DMD problems in continuous time.[19]
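As a minimal illustration of working with snapshot pairs rather than a single time series, the following sketch follows the spirit of Exact DMD;[7] the function name, the `rank` option, and the pair matrices `X` and `Y` are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def exact_dmd(X, Y, rank=None):
    """Sketch of DMD from snapshot pairs in the spirit of Exact DMD [7].

    X and Y are M x K matrices whose k-th columns form a pair (x_k, y_k)
    separated by the same fixed time interval; the pairs need not come
    from a single time series.
    """
    U, s, Wh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Wh = U[:, :rank], s[:rank], Wh[:rank, :]
    # Projected operator, as in the SVD-based algorithm above.
    A_tilde = U.conj().T @ Y @ Wh.conj().T @ np.diag(1.0 / s)
    eigvals, eigvecs = np.linalg.eig(A_tilde)
    # "Exact" modes are built from Y rather than projected onto U
    # (assumes nonzero eigenvalues).
    modes = (Y @ Wh.conj().T @ np.diag(1.0 / s) @ eigvecs) / eigvals
    return eigvals, modes
```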
In addition to the algorithms listed here, similar application-specific techniques have been developed. For example, like DMD, Prony's method represents a signal as the superposition of damped sinusoids. In climate science, linear inverse modeling is also strongly connected with DMD.[20] For a more comprehensive list, see Tu et al.[7]
Examples
Trailing edge of a profile
The wake of an obstacle in the flow may develop a Kármán vortex street. Fig. 1 shows the shedding of a vortex behind the trailing edge of a profile. The DMD analysis was applied to 90 sequential entropy fields and yielded an approximate eigenvalue spectrum as depicted below. The analysis was applied to the numerical results, without referring to the governing equations. The profile is seen in white. The white arcs are the processor boundaries, since the computation was performed on a parallel computer using different computational blocks. Roughly a third of the spectrum was highly damped (large negative real part) and is not shown. The dominant shedding mode is shown in the following pictures. The image to the left is the real part, the image to the right the imaginary part of the eigenvector.
Again, the entropy eigenvector is shown in this picture. The acoustic content of the same mode is seen in the bottom half of the next plot. The top half corresponds to the entropy mode as above.
Synthetic example of a traveling pattern
The DMD analysis assumes a pattern of the form

$$q(x_1, x_2, x_3, \ldots) = e^{c x_1}\,\hat q(x_2, x_3, \ldots),$$

where $x_1$ is any of the independent variables of the problem, but has to be selected in advance. Take for example the pattern

$$q(x, y, t) = e^{-i\omega t}\,\hat q(x, y) = \Re\left\{ e^{i(kx - \omega t)} \right\} + \text{random noise},$$

with the time as the preselected exponential factor.
A sample is given in the following figure. The left picture shows the pattern without noise, the right with noise added. The amplitude of the random noise is the same as that of the pattern.
A DMD analysis is performed with 21 synthetically generated fields sampled at a fixed time interval, which limits the range of frequencies the analysis can resolve.
The spectrum is symmetric and shows three almost undamped modes (small negative real part), whereas the other modes are heavily damped. Their numerical values are $\omega_1 = -0.201$ and $\omega_{2/3} = -0.223 \pm 62.768\,i$, respectively. The real one corresponds to the mean of the field, whereas $\omega_{2/3}$ corresponds to the imposed pattern with $\omega \approx 62.83$, yielding a relative error of $-1/1000$. Increasing the noise to 10 times the signal value yields about the same error. The real and imaginary parts of one of the latter two eigenmodes are depicted in the following figure.
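A rough reconstruction of this experiment is sketched below using the SVD-based algorithm from earlier. The grid, time step, wavenumber, and noise seed are stand-in values rather than those used for the figures; only the imposed frequency is taken near the recovered value quoted above.

```python
import numpy as np

# Illustrative reconstruction: a traveling wave Re{exp(i(k x - w t))} plus
# noise of comparable amplitude. Parameter values below are assumptions.
rng = np.random.default_rng(1)
nx, nt = 128, 21
x = np.linspace(0.0, 1.0, nx)
dt = 0.01
k, w = 2 * np.pi * 4, 62.83                 # assumed wavenumber and frequency
t = dt * np.arange(nt)
clean = np.real(np.exp(1j * (k * x[:, None] - w * t[None, :])))
X = clean + rng.standard_normal((nx, nt))   # noise amplitude ~ signal amplitude

# SVD-based DMD on the snapshot sequence; continuous-time growth rates and
# frequencies follow from the discrete eigenvalues via log(lambda) / dt.
V1, V2 = X[:, :-1], X[:, 1:]
U, s, Wh = np.linalg.svd(V1, full_matrices=False)
S_tilde = U.T @ V2 @ Wh.T @ np.diag(1.0 / s)
lam = np.linalg.eigvals(S_tilde)
omega = np.log(lam) / dt                    # compare imaginary parts with w
```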
See also
Several other decompositions of experimental data exist. If the governing equations are available, an eigenvalue decomposition might be feasible.
References
- P.J. Schmid and J. Sesterhenn, "Dynamic mode decomposition of numerical and experimental data." Bulletin of the American Physical Society, 61st Annual Meeting of the APS Division of Fluid Dynamics 53.15 (2008).
- P.J. Schmid, "Dynamic mode decomposition of numerical and experimental data." Journal of Fluid Mechanics 656 (2010): 5–28. doi:10.1017/S0022112010001217.
- P.J. Schmid. "Dynamic mode decomposition of numerical and experimental data." Journal of Fluid Mechanics 656.1 (2010): 5–28.
- C.W. Rowley, I Mezic, S. Bagheri, P. Schlatter, and D.S. Henningson, "Spectral analysis of nonlinear flows." Journal of Fluid Mechanics 641 (2009): 85-113
- K.K. Chen, J.H. Tu, and C.W. Rowley, "Variants of dynamic mode decomposition: boundary condition, Koopman, and Fourier analyses." Journal of Nonlinear Science 22 (2012): 887-915.
- A. Wynn, D. S. Pearson, B. Ganapathisubramani and P. J. Goulart, "Optimal mode decomposition for unsteady flows." Journal of Fluid Mechanics 733 (2013): 473-503
- J.H. Tu, C.W. Rowley, D.M. Luchtenburg, S.L. Brunton, and J.N. Kutz, "On dynamic mode decomposition: Theory and applications." Journal of Computational Dynamics 1.2 (2014): 391–421. doi:10.3934/jcd.2014.1.391. arXiv:1312.0041.
- M.R. Jovanovic, P.J. Schmid, and J.W. Nichols, "Sparsity-promoting dynamic mode decomposition." Physics of Fluids 26 (2014)
- J.N. Kutz, X. Fu, and S.L. Brunton, "Multi-resolution dynamic mode decomposition." arXiv preprint arXiv:1506.00564 (2015).
- M.O. Williams, I.G. Kevrekidis, C.W. Rowley, "A Data–Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition." Journal of Nonlinear Science 25 (2015): 1307-1346.
- Q. Li, F. Dietrich, E.M. Bollt, and I.G. Kevrekidis, "Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.10 (2017): 103111. doi:10.1063/1.4993854. arXiv:1707.00225.
- M. Gulina and A. Mauroy, "Two methods to approximate the Koopman operator with a reservoir computer." Chaos: An Interdisciplinary Journal of Nonlinear Science 31.2 (2021): 023116. doi:10.1063/5.0026380. arXiv:2008.10263.
- M.J. Colbrook and A. Townsend, "Rigorous data-driven computation of spectral properties of Koopman operators for dynamical systems." Communications on Pure and Applied Mathematics 77 (2023): 221–283. doi:10.1002/cpa.22125. arXiv:2111.14889.
- M.J. Colbrook, L.J. Ayton, and M. Szőke, "Residual dynamic mode decomposition: robust and verified Koopmanism." Journal of Fluid Mechanics 955 (2023): A21. doi:10.1017/jfm.2022.1052. arXiv:2205.09779.
- P.J. Baddoo, B. Herrmann, B.J. McKeon, J.N. Kutz, and S.L. Brunton, "Physics-informed dynamic mode decomposition." Proceedings of the Royal Society A 479.2271 (2023). doi:10.1098/rspa.2022.0576.
- M.J. Colbrook, "The mpEDMD algorithm for data-driven computations of measure-preserving dynamical systems." SIAM Journal on Numerical Analysis 61.3 (2023): 1585–1608. doi:10.1137/22M1521407. arXiv:2209.02244.
- J.L. Proctor, S.L. Brunton, and J.N. Kutz, "Dynamic mode decomposition with control." arXiv preprint arXiv:1409.6358 (2014).
- M.S. Hemati, C.W. Rowley, E.A. Deem, and L.N. Cattafesta, "De-Biasing the Dynamic Mode Decomposition for Applied Koopman Spectral Analysis of Noisy Datasets." arXiv preprint arXiv:1502.03854 (2015).
- J.P. Taylor-King, A.N. Riseth, W. Macnair, and M. Claassen, "Dynamic distribution decomposition for single-cell snapshot time series identifies subpopulations and trajectories during iPSC reprogramming." PLOS Computational Biology 16.1 (2020): e1007491. doi:10.1371/journal.pcbi.1007491.
- C. Penland and T. Magorian, "Prediction of Niño 3 sea surface temperatures using linear inverse modeling." Journal of Climate 6.6 (1993): 1067. doi:10.1175/1520-0442(1993)006<1067:PONSST>2.0.CO;2.
- Schmid, P. J. & Sesterhenn, J. L. 2008 Dynamic mode decomposition of numerical and experimental data. In Bull. Amer. Phys. Soc., 61st APS meeting, p. 208. San Antonio.
- Hasselmann, K., 1988. POPs and PIPs. The reduction of complex dynamical systems using principal oscillation and interaction patterns. J. Geophys. Res., 93(D9): 10975 - 10988.