Proper generalized decomposition

The proper generalized decomposition (PGD) is an iterative numerical method for solving boundary value problems (BVPs), that is, partial differential equations constrained by a set of boundary conditions, such as Poisson's equation or Laplace's equation.

The PGD algorithm computes an approximation of the solution of the BVP by successive enrichment. This means that, in each iteration, a new component (or mode) is computed and added to the approximation. In principle, the more modes obtained, the closer the approximation is to the exact solution. Unlike the principal components of POD, PGD modes are not necessarily orthogonal to each other.

By selecting only the most relevant PGD modes, a reduced order model of the solution is obtained. Because of this, PGD is considered a dimensionality reduction algorithm.

Description

The proper generalized decomposition is a method characterized by

  1. a variational formulation of the problem,
  2. a discretization of the domain in the style of the finite element method,
  3. the assumption that the solution can be approximated as a separate representation and
  4. a numerical greedy algorithm to find the solution.[1] [2]

Variational formulation

In the proper generalized decomposition method, the variational formulation recasts the problem into a form in which the solution can be approximated by minimizing (or sometimes maximizing) a functional, that is, a scalar quantity that depends on a function; here, the functional encodes the problem, and the function on which it acts is the candidate solution.

The most commonly implemented variational formulation in PGD is the Bubnov-Galerkin method.[3] [4] This method is chosen for its ability to provide an approximate solution to complex problems, such as those described by partial differential equations (PDEs). In the Bubnov-Galerkin approach, the idea is to project the problem onto a space spanned by a finite number of basis functions. These basis functions are chosen to approximate the solution space of the problem.

In the Bubnov-Galerkin method, we seek an approximate solution that satisfies the integral (weak) form of the PDE over the domain of the problem, rather than solving the differential equation directly. In this way, the method transforms the problem into finding the coefficients that best satisfy this integral equation in the chosen function space.
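As an illustration of this idea, the following sketch assembles and solves the Bubnov-Galerkin system for the one-dimensional Poisson problem -u'' = f on (0, 1) with homogeneous Dirichlet boundary conditions, using piecewise-linear hat functions on a uniform mesh. It is only a minimal sketch: the function name, the choice of basis and the one-point quadrature are illustrative assumptions, not taken from the cited references.

```python
import numpy as np

def galerkin_poisson_1d(f, n_elements=10):
    """Bubnov-Galerkin approximation of -u'' = f on (0, 1) with u(0) = u(1) = 0,
    using piecewise-linear hat functions on a uniform mesh (illustrative sketch)."""
    h = 1.0 / n_elements
    nodes = np.linspace(0.0, 1.0, n_elements + 1)
    n_free = n_elements - 1                      # interior (unknown) nodes

    # Stiffness matrix K[i, j] = integral of phi_i' * phi_j' (tridiagonal for hats)
    K = (2.0 * np.eye(n_free) - np.eye(n_free, k=1) - np.eye(n_free, k=-1)) / h

    # Load vector F[i] = integral of f * phi_i, approximated by f(x_i) * h,
    # since each hat function integrates to h
    F = h * f(nodes[1:-1])

    # Galerkin condition: the residual is orthogonal to every basis function,
    # which yields the linear system K u = F for the nodal coefficients
    u_interior = np.linalg.solve(K, F)
    return nodes, np.concatenate(([0.0], u_interior, [0.0]))

# Example: f(x) = pi^2 sin(pi x), whose exact solution is u(x) = sin(pi x)
nodes, u = galerkin_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x), n_elements=20)
```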

While the Bubnov-Galerkin method is prevalent, other variational formulations are also used in PGD,[5] depending on the specific requirements and characteristics of the problem, such as the Petrov-Galerkin method.[6]

Domain discretization

The discretization of the domain is a well-defined set of procedures that covers (a) the creation of finite element meshes, (b) the definition of basis functions on reference elements (also called shape functions) and (c) the mapping of the reference elements onto the elements of the mesh.
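A minimal sketch of these three ingredients in one dimension is given below, assuming a uniform mesh on [0, 1], linear shape functions on the reference element [-1, 1] and an affine map to each physical element; the names are illustrative, not identifiers from any particular library.

```python
import numpy as np

# (a) Mesh: nodes and element connectivity for a uniform 1-D mesh on [0, 1]
n_elements = 4
nodes = np.linspace(0.0, 1.0, n_elements + 1)
elements = [(e, e + 1) for e in range(n_elements)]   # node indices of each element

# (b) Shape functions defined on the reference element [-1, 1]
def shape_functions(xi):
    """Linear shape functions N1, N2 evaluated at the reference coordinate xi."""
    return np.array([0.5 * (1.0 - xi), 0.5 * (1.0 + xi)])

# (c) Mapping of the reference element onto a physical element [x1, x2]
def reference_to_physical(xi, x1, x2):
    """Affine (isoparametric) map x(xi) = N1(xi) * x1 + N2(xi) * x2."""
    N = shape_functions(xi)
    return N[0] * x1 + N[1] * x2

# Example: the centre of the reference element maps to the element midpoint
x1, x2 = nodes[elements[2][0]], nodes[elements[2][1]]
print(reference_to_physical(0.0, x1, x2))            # 0.625 for this mesh
```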

Separate representation

PGD assumes that the solution u of a (multidimensional) problem can be approximated as a separate representation of the form \mathbf{u} \approx \mathbf{u}^N(x_1, x_2, \ldots, x_d) = \sum_{i=1}^N \mathbf{X1}_i(x_1) \cdot \mathbf{X2}_i(x_2) \cdots \mathbf{Xd}_i(x_d), where the number of addends N and the functional products X1(x1), X2(x2), ..., Xd(xd), each depending on a variable (or variables), are unknown beforehand.
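As a sketch of how such a representation is handled in practice, assume each functional product is stored as samples on a one-dimensional grid; the helper evaluate_separated and the example modes below are hypothetical, chosen only to make the structure of the sum of products concrete.

```python
import numpy as np

def evaluate_separated(modes, grids, point):
    """Evaluate u^N(x_1, ..., x_d) = sum_i X1_i(x_1) * X2_i(x_2) * ... * Xd_i(x_d).

    modes : list of N modes; each mode is a list of d 1-D arrays sampling
            its functional products X1_i, ..., Xd_i on the corresponding grids
    grids : list of d 1-D arrays, one grid per coordinate
    point : tuple (x_1, ..., x_d) where the approximation is evaluated
    """
    total = 0.0
    for mode in modes:                                # sum over the N modes
        product = 1.0
        for Xk, grid, xk in zip(mode, grids, point):
            product *= np.interp(xk, grid, Xk)        # evaluate each 1-D factor
        total += product
    return total

# Example with d = 2 and N = 2: u^2(x, y) = X1_1(x) X2_1(y) + X1_2(x) X2_2(y)
x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 50)
modes = [[np.sin(np.pi * x), np.sin(np.pi * y)],
         [0.1 * x, y ** 2]]
print(evaluate_separated(modes, [x, y], (0.3, 0.7)))
```

Note that the representation stores N · d one-dimensional functions instead of a full d-dimensional grid, which is what keeps it tractable as the number of dimensions grows.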

Greedy algorithm

The solution is sought by applying a greedy algorithm, usually the fixed-point algorithm, to the weak formulation of the problem. For each iteration i of the algorithm, a mode of the solution is computed. Each mode consists of a set of numerical values of the functional products X1(x1), ..., Xd(xd), which enrich the approximation of the solution. Due to the greedy nature of the algorithm, the term 'enrich' is used rather than 'improve', since some modes may actually worsen the approximation. The number of computed modes required to obtain an approximation of the solution below a certain error threshold depends on the stopping criterion of the iterative algorithm.
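To make the structure of the enrichment loop and of the fixed-point alternation concrete, the sketch below applies them to a simpler task than solving a weak form: building a separated, mode-by-mode approximation of a known two-dimensional field. A real PGD solver applies the same loop to the weak formulation of the problem; everything here is an illustrative assumption rather than the algorithm of the cited references.

```python
import numpy as np

def greedy_separated_approximation(F, n_modes=5, fp_iters=20, tol=1e-8):
    """Greedy enrichment with an alternating fixed-point iteration (sketch).

    Builds a separated approximation of a sampled field
    F[i, j] ~ sum_k X_k[i] * Y_k[j], computing one new mode per outer iteration.
    """
    residual = F.copy()
    X_modes, Y_modes = [], []
    for _ in range(n_modes):                       # greedy enrichment loop
        Y = np.ones(F.shape[1])                    # initial guess for the new mode
        for _ in range(fp_iters):                  # alternating fixed-point iteration
            X = residual @ Y / (Y @ Y)             # best X with Y frozen
            Y = residual.T @ X / (X @ X)           # best Y with X frozen
        X_modes.append(X)
        Y_modes.append(Y)
        residual -= np.outer(X, Y)                 # enrich: subtract the new mode
        if np.linalg.norm(residual) < tol * np.linalg.norm(F):
            break                                  # stopping criterion reached
    return X_modes, Y_modes

# Example: separate f(x, y) = sin(pi x) * y + 0.2 * x * y^2 sampled on a grid
x = np.linspace(0, 1, 40)
y = np.linspace(0, 1, 40)
F = np.outer(np.sin(np.pi * x), y) + 0.2 * np.outer(x, y ** 2)
X_modes, Y_modes = greedy_separated_approximation(F)
print(len(X_modes))                                # number of computed modes
```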

Features

PGD is suitable for solving high-dimensional problems, since it overcomes the limitations of classical approaches. In particular, PGD avoids the curse of dimensionality, because solving a set of lower-dimensional, decoupled problems is computationally much less expensive than solving a single multidimensional problem.

Therefore, PGD makes it possible to recast parametric problems within a multidimensional framework by setting the parameters of the problem as extra coordinates: \mathbf{u} \approx \mathbf{u}^N(x_1, \ldots, x_d; k_1, \ldots, k_p) = \sum_{i=1}^N \mathbf{X1}_i(x_1) \cdots \mathbf{Xd}_i(x_d) \cdot \mathbf{K1}_i(k_1) \cdots \mathbf{Kp}_i(k_p), where a series of functional products K1(k1), K2(k2), ..., Kp(kp), each depending on a parameter (or parameters), has been incorporated into the equation.

In this case, the obtained approximation of the solution is called a computational vademecum: a general meta-model containing all the particular solutions for every possible value of the involved parameters.[7]
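The sketch below illustrates, for a hypothetical vademecum with a single spatial coordinate and a single parameter, how such a meta-model is used online: once the one-dimensional modes have been computed offline, the full solution for any parameter value is recovered as a cheap sum of products, so the parameter can be swept without calling the full solver again. The modes shown are made up for the example.

```python
import numpy as np

# Offline stage (assumed already done): separated modes of a hypothetical
# vademecum u(x; k) ~ sum_i X_i(x) * K_i(k), stored on 1-D grids
x_grid = np.linspace(0, 1, 200)
k_grid = np.linspace(0, 5, 200)
space_modes = [np.sin(np.pi * x_grid), np.sin(2 * np.pi * x_grid)]
param_modes = [np.exp(-k_grid), 0.1 * k_grid]

def query(k):
    """Online stage: rebuild the full spatial solution for a parameter value k
    as a sum of products of the stored one-dimensional modes."""
    return sum(Xi * np.interp(k, k_grid, Ki)
               for Xi, Ki in zip(space_modes, param_modes))

# Sweep the parameter, e.g. for design or optimization, without re-solving
for k in np.linspace(0.5, 4.5, 9):
    u_k = query(k)                     # full spatial field for this parameter value
```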

Sparse Subspace Learning

The Sparse Subspace Learning (SSL) method leverages hierarchical collocation to approximate the numerical solution of parametric models. With respect to traditional projection-based reduced-order modeling, the use of collocation enables a non-intrusive approach based on sparse adaptive sampling of the parametric space. This makes it possible to recover the low-dimensional structure of the parametric solution subspace while also learning the functional dependence on the parameters in explicit form. A sparse, low-rank approximate tensor representation of the parametric solution can be built through an incremental strategy that only needs access to the output of a deterministic solver. This non-intrusiveness makes the approach straightforwardly applicable to challenging problems characterized by nonlinearity or non-affine weak forms.[8]
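The sketch below conveys the non-intrusive idea in a heavily simplified, one-parameter setting: a deterministic solver is treated as a black box, it is called at nested collocation points, and the hierarchical surpluses (the difference between the solver output and the current interpolant) decide when to stop refining. The actual SSL method works with sparse grids in several parameters and incrementally builds a low-rank tensor representation; all names below are hypothetical.

```python
import numpy as np

def hierarchical_collocation(solve, k_min, k_max, max_level=6, tol=1e-4):
    """Non-intrusive hierarchical collocation in one parameter (sketch).

    'solve' is a black-box deterministic solver returning the solution vector
    for a given parameter value; refinement stops once the hierarchical
    surpluses of a whole level fall below the tolerance."""
    points = [k_min, k_max]                       # level-0 collocation points
    values = [solve(k) for k in points]

    def interpolant(k):
        # piecewise-linear interpolation through all samples collected so far
        order = np.argsort(points)
        pts = np.asarray(points)[order]
        vals = np.asarray(values)[order]
        return np.array([np.interp(k, pts, vals[:, j]) for j in range(vals.shape[1])])

    for level in range(1, max_level + 1):
        # nested points new to this level (odd multiples of the level spacing)
        new_points = [k_min + (k_max - k_min) * (2 * i + 1) / 2 ** level
                      for i in range(2 ** (level - 1))]
        new_values = [solve(k) for k in new_points]        # black-box solver calls
        surpluses = [np.linalg.norm(v - interpolant(k))    # hierarchical surpluses
                     for k, v in zip(new_points, new_values)]
        points.extend(new_points)
        values.extend(new_values)
        if max(surpluses) < tol:                           # level converged: stop
            break
    return points, values, interpolant

# Example with a hypothetical solver: u(x; k) = exp(-k x) on a fixed space grid
x = np.linspace(0, 1, 50)
points, values, u_hat = hierarchical_collocation(lambda k: np.exp(-k * x), 0.0, 2.0)
```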

Notes and References

  1. Ammar, Amine; Mokdad, Béchir; Chinesta, Francisco; Keunings, Roland (2006). "A New Family of Solvers for Some Classes of Multidimensional Partial Differential Equations Encountered in Kinetic Theory Modeling of Complex Fluids". Journal of Non-Newtonian Fluid Mechanics.
  2. Ammar, Amine; Mokdad, Béchir; Chinesta, Francisco; Keunings, Roland (2007). "A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modelling of complex fluids. Part II: Transient simulation using space-time separated representations". Journal of Non-Newtonian Fluid Mechanics.
  3. Croft, Thomas Lloyd David (9 April 2015). Proper generalised decompositions: theory and applications (PhD thesis). Cardiff University.
  4. Chinesta, Francisco; Keunings, Roland; Leygue, Adrien (2014). The Proper Generalized Decomposition for Advanced Numerical Simulations: A Primer. SpringerBriefs in Applied Sciences and Technology. Springer International Publishing. ISBN 978-3-319-02864-4.
  5. Aguado, José Vicente (18 November 2018). "Advanced strategies for the separated formulation of problems in the Proper Generalized Decomposition framework".
  6. Perelló i Ribas, Rafel (22 June 2020). Petrov-Galerkin Proper Generalized Decomposition strategies for convection-diffusion problems (Master's thesis). Universitat Politècnica de Catalunya.
  7. Chinesta, Francisco; Leygue, Adrien; Bordeu, Felipe; Cueto, Elías; Gonzalez, David; Ammar, Amine; Huerta, Antonio (2013). "PGD-Based Computational Vademecum for Efficient Design, Optimization and Control". Archives of Computational Methods in Engineering.
  8. Borzacchiello, Domenico; Aguado, José V.; Chinesta, Francisco (April 2019). "Non-intrusive Sparse Subspace Learning for Parametrized Problems". Archives of Computational Methods in Engineering. 26 (2): 303–326. doi:10.1007/s11831-017-9241-4. hdl:10985/18435. ISSN 1134-3060. S2CID 126121268.