Linear matrix inequality

In convex optimization, a linear matrix inequality (LMI) is an expression of the form

\operatorname{LMI}(y) := A_0 + y_1 A_1 + y_2 A_2 + \cdots + y_m A_m \succeq 0

where y = [y_i,\ i = 1, \dots, m] is a real vector, A_0, A_1, A_2, \dots, A_m are n \times n symmetric matrices in S^n, and B \succeq 0 is a generalized inequality meaning that B is a positive semidefinite matrix belonging to the positive semidefinite cone S_+ in the subspace of symmetric matrices S.

This linear matrix inequality specifies a convex constraint on y.
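As a minimal illustration of the definition, the sketch below evaluates LMI(y) for 2×2 symmetric matrices in plain Python and checks positive semidefiniteness with the trace/determinant test (which characterizes PSD only in the 2×2 case). The particular matrices A0 and A1 are made-up examples, not taken from the text:

```python
# Minimal sketch of the LMI definition for 2x2 symmetric matrices,
# using plain Python lists (illustrative, not a production SDP solver).

def lmi(y, A):
    """Evaluate LMI(y) = A[0] + y[0]*A[1] + ... + y[m-1]*A[m]."""
    n = len(A[0])
    M = [[A[0][i][j] for j in range(n)] for i in range(n)]
    for k, yk in enumerate(y, start=1):
        for i in range(n):
            for j in range(n):
                M[i][j] += yk * A[k][i][j]
    return M

def is_psd_2x2(M):
    """A 2x2 symmetric matrix is PSD iff its trace and determinant are >= 0
    (both eigenvalues nonnegative <=> their sum and product are nonnegative)."""
    trace = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return trace >= 0 and det >= 0

# Example data: A0 = I, A1 = diag(1, -1), so LMI(y) = I + y1*A1
# is PSD exactly when |y1| <= 1 (a convex set of feasible y, as claimed).
A = [
    [[1.0, 0.0], [0.0, 1.0]],   # A0
    [[1.0, 0.0], [0.0, -1.0]],  # A1
]
print(is_psd_2x2(lmi([0.5], A)))  # True: y1 = 0.5 is feasible
print(is_psd_2x2(lmi([2.0], A)))  # False: y1 = 2.0 is infeasible
```

The feasible set {y : LMI(y) ⪰ 0} here is the interval [-1, 1], illustrating that an LMI constraint carves out a convex region of y-space.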

Applications

There are efficient numerical methods to determine whether an LMI is feasible (that is, whether there exists a vector y such that \operatorname{LMI}(y) \succeq 0), and to solve convex optimization problems with LMI constraints. Many optimization problems in control theory, system identification and signal processing can be formulated using LMIs. LMIs also arise in polynomial sum-of-squares problems. The prototypical primal and dual semidefinite programs minimize a real linear function subject, respectively, to the primal and dual convex cones governing this LMI.
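Concretely, the primal/dual pair mentioned above can be written in one common convention (the notation c, Z below is an assumption for illustration, not taken from this article):

```latex
% Primal SDP: minimize a linear function of y subject to the LMI constraint
\min_{y \in \mathbb{R}^m} \; c^{\top} y
\quad \text{subject to} \quad
A_0 + \sum_{i=1}^{m} y_i A_i \succeq 0.

% Dual SDP: a linear function of a PSD matrix variable Z,
% with one linear trace equality per component of y
\max_{Z \in S^n} \; -\operatorname{tr}(A_0 Z)
\quad \text{subject to} \quad
\operatorname{tr}(A_i Z) = c_i \ (i = 1, \dots, m), \quad Z \succeq 0.
```

The primal variable y ranges over vectors satisfying the LMI cone constraint, while the dual variable Z ranges over the positive semidefinite cone itself, which is the sense in which the two convex cones "govern" this LMI.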

Solving LMIs

A major breakthrough in convex optimization was the introduction of interior-point methods. These methods were developed in a series of papers and attracted particular interest in the context of LMI problems through the work of Yurii Nesterov and Arkadi Nemirovski.
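The interior-point approach can be sketched via the standard logarithmic barrier for the positive semidefinite cone; this is a sketch under the usual assumption that a strictly feasible point with \operatorname{LMI}(y) \succ 0 exists:

```latex
% Log-det barrier: finite exactly on the strictly feasible set of the LMI
\phi(y) = -\log\det\Bigl(A_0 + \sum_{i=1}^{m} y_i A_i\Bigr),
\qquad \text{defined where } \operatorname{LMI}(y) \succ 0.

% Barrier method: for increasing t > 0, minimize the penalized objective;
% the minimizers y^*(t) trace the central path toward the SDP optimum
y^*(t) = \arg\min_{y} \; t\, c^{\top} y + \phi(y).
```

The barrier blows up as \operatorname{LMI}(y) approaches singularity, keeping iterates in the interior of the feasible set; driving t upward recovers the constrained optimum in the limit.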
