In convex optimization, a linear matrix inequality (LMI) is an expression of the form
$$\operatorname{LMI}(y) := A_0 + y_1 A_1 + y_2 A_2 + \cdots + y_m A_m \succeq 0,$$

where $y = [y_i,\ i = 1, \dots, m]$ is a real vector, $A_0, A_1, A_2, \dots, A_m$ are $n \times n$ symmetric matrices in $\mathbb{S}^n$, and $B \succeq 0$ is a generalized inequality meaning that $B$ is a positive semidefinite matrix belonging to the positive semidefinite cone $\mathbb{S}_+$ in the space of symmetric matrices $\mathbb{S}$.

This linear matrix inequality specifies a convex constraint on $y$.
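For instance, a simple illustrative choice (not drawn from any particular application) with $m = 2$ and $n = 2$ is

$$A_0 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},$$

for which the LMI becomes

$$\begin{pmatrix} 1 & y_1 \\ y_1 & y_2 \end{pmatrix} \succeq 0,$$

which holds exactly when $y_2 \geq y_1^2$, a convex region in the $(y_1, y_2)$ plane.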
There are efficient numerical methods to determine whether an LMI is feasible (that is, whether there exists a vector $y$ such that $\operatorname{LMI}(y) \succeq 0$), or to solve a convex optimization problem with LMI constraints. Many optimization problems in control theory, system identification and signal processing can be formulated using LMIs. LMIs also find application in polynomial sum-of-squares optimization. The prototypical primal and dual semidefinite programs minimize a real linear function subject, respectively, to the primal and dual convex cones governing this LMI.
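As a sketch of how such a feasibility check can be set up in practice, the following assumes the CVXPY modeling library and uses small made-up data matrices (the same illustrative $A_0, A_1, A_2$ as above, shifted so the example remains feasible); it is not tied to any particular application.

```python
# Minimal sketch of an LMI feasibility check, assuming the CVXPY library.
# The matrices A0, A1, A2 below are made-up illustrative data.
import numpy as np
import cvxpy as cp

# Symmetric data matrices defining LMI(y) = A0 + y1*A1 + y2*A2.
A0 = np.array([[1.0, 0.0], [0.0, -1.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, 1.0]])

y = cp.Variable(2)                  # decision vector y = (y1, y2)
lmi = A0 + y[0] * A1 + y[1] * A2    # affine, matrix-valued expression in y
constraints = [lmi >> 0]            # LMI constraint: expression must be PSD

# Pure feasibility problem: minimize the constant 0 subject to the LMI.
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()

print(problem.status)   # "optimal" indicates that a feasible y exists
print(y.value)          # a feasible vector y returned by the solver
```

Behind the scenes, an SDP-capable solver is needed to handle the positive semidefinite constraint; replacing the constant objective with a linear function of $y$ turns this feasibility problem into the prototypical semidefinite program mentioned above.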
A major breakthrough in convex optimization was the introduction of interior-point methods. These methods were developed in a series of papers and became of particular interest in the context of LMI problems through the work of Yurii Nesterov and Arkadi Nemirovski.