Total variation denoising

In signal processing, particularly image processing, total variation denoising, also known as total variation regularization or total variation filtering, is a noise removal process (filter). It is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, the integral of the image gradient magnitude is high. According to this principle, reducing the total variation of the signal—subject to it being a close match to the original signal—removes unwanted detail whilst preserving important details such as edges. The concept was pioneered by L. I. Rudin, S. Osher, and E. Fatemi in 1992 and so is today known as the ROF model.[1]

This noise removal technique has advantages over simple techniques such as linear smoothing or median filtering which reduce noise but at the same time smooth away edges to a greater or lesser degree. By contrast, total variation denoising is a remarkably effective edge-preserving filter, i.e., simultaneously preserving edges whilst smoothing away noise in flat regions, even at low signal-to-noise ratios.[2]

1D signal series

For a digital signal x_n, we can, for example, define the total variation as

V(x) = \sum_n |x_{n+1} - x_n|.

Given an input signal x_n, the goal of total variation denoising is to find an approximation, call it y_n, that has smaller total variation than x_n but is "close" to x_n. One measure of closeness is the sum of square errors:

E(x, y) = \frac{1}{n} \sum_n (x_n - y_n)^2.

So the total-variation denoising problem amounts to minimizing the following discrete functional over the signal y_n:

E(x, y) + \lambda V(y).
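As a concrete check of these definitions, both quantities can be computed directly with NumPy (a minimal sketch; the function names `total_variation` and `closeness` are ours, not standard API):

```python
import numpy as np

def total_variation(x):
    """Discrete total variation V(x) = sum_n |x_{n+1} - x_n|."""
    return float(np.sum(np.abs(np.diff(x))))

def closeness(x, y):
    """Closeness measure E(x, y) = (1/n) * sum_n (x_n - y_n)^2."""
    return float(np.mean((np.asarray(x) - np.asarray(y)) ** 2))

x = np.array([0.0, 0.1, 1.2, 0.9, 1.0])    # noisy step-like input
y = np.array([0.05, 0.05, 1.0, 1.0, 1.0])  # a smoother candidate

print(total_variation(x))  # ≈ 1.6:  |0.1| + |1.1| + |-0.3| + |0.1|
print(total_variation(y))  # ≈ 0.95: the candidate has lower total variation
print(closeness(x, y))     # ≈ 0.011: yet it stays close to the input
```

The candidate y trades a small increase in E for a large decrease in V, which is exactly the trade-off the functional below formalizes.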

By differentiating this functional with respect to y_n, we can derive a corresponding Euler–Lagrange equation, which can be numerically integrated with the original signal x_n as the initial condition. This was the original approach.[1] Alternatively, since this is a convex functional, techniques from convex optimization can be used to minimize it and find the solution y_n.[3]
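For illustration, the convex functional can be minimized with plain gradient descent once the non-differentiable absolute value is smoothed as |t| ≈ sqrt(t² + ε²). This is a sketch under assumed parameter values, not the original ROF scheme or a production solver:

```python
import numpy as np

def tv_denoise_1d(x, lam, eps=0.01, step=0.05, iters=5000):
    """Minimize (1/n)*sum((x - y)**2) + lam * sum(sqrt(diff(y)**2 + eps**2))
    by gradient descent; eps smooths the absolute value inside V."""
    n = len(x)
    y = x.astype(float).copy()
    for _ in range(iters):
        d = np.diff(y)                       # d_k = y_{k+1} - y_k
        w = d / np.sqrt(d ** 2 + eps ** 2)   # derivative of smoothed |d_k|
        grad = (2.0 / n) * (y - x)           # gradient of the data term
        grad[:-1] -= lam * w                 # d|d_k| / dy_k     = -sign(d_k)
        grad[1:] += lam * w                  # d|d_k| / dy_{k+1} = +sign(d_k)
        y -= step * grad
    return y

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(20), np.ones(20)])   # ideal step edge
noisy = clean + 0.1 * rng.standard_normal(40)
denoised = tv_denoise_1d(noisy, lam=0.02)
# The result has much lower total variation, yet the step edge survives.
```

The noise in the two flat regions is smoothed away while the jump at the midpoint, which the quadratic penalty of linear smoothing would blur, is largely preserved.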

Regularization properties

The regularization parameter \lambda plays a critical role in the denoising process. When \lambda = 0, there is no smoothing and the result is the same as minimizing the sum of squares. As \lambda \to \infty, however, the total variation term plays an increasingly strong role, which forces the result to have smaller total variation, at the expense of being less like the input (noisy) signal. Thus, the choice of regularization parameter is critical to achieving just the right amount of noise removal.
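These two limits can be verified exactly on the smallest non-trivial case. For a two-sample signal, the minimizer of E(x, y) + λV(y), with E as defined above (so n = 2), has a closed form: the mean of the two samples is preserved and their difference is soft-thresholded by 2λ. This is a worked illustration, not a general algorithm:

```python
def tv_denoise_two_samples(x1, x2, lam):
    """Exact minimizer of (1/2)*((x1-y1)**2 + (x2-y2)**2) + lam*|y2 - y1|.
    In mean/difference coordinates the mean is kept unchanged and the
    difference d = x2 - x1 shrinks to sign(d) * max(|d| - 2*lam, 0)."""
    mean = 0.5 * (x1 + x2)
    diff = x2 - x1
    sign = 1.0 if diff >= 0 else -1.0
    shrunk = sign * max(abs(diff) - 2.0 * lam, 0.0)
    return mean - 0.5 * shrunk, mean + 0.5 * shrunk

# lam = 0: no smoothing, the input is returned unchanged.
print(tv_denoise_two_samples(0.0, 1.0, 0.0))   # (0.0, 1.0)
# Large lam: the total variation term dominates and both samples
# collapse to their common mean, giving V(y) = 0.
print(tv_denoise_two_samples(0.0, 1.0, 10.0))  # (0.5, 0.5)
```

Intermediate values of lam shrink the difference only partially, which is the "just right" regime described above.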

2D signal images

We now consider 2D signals y, such as images. The total-variation norm proposed by the 1992 article is

V(y) = \sum_{i,j} \sqrt{|y_{i+1,j} - y_{i,j}|^2 + |y_{i,j+1} - y_{i,j}|^2},

which is isotropic and not differentiable. A variation that is sometimes used, since it may sometimes be easier to minimize, is an anisotropic version,

V_{\mathrm{aniso}}(y) = \sum_{i,j} \sqrt{|y_{i+1,j} - y_{i,j}|^2} + \sqrt{|y_{i,j+1} - y_{i,j}|^2} = \sum_{i,j} |y_{i+1,j} - y_{i,j}| + |y_{i,j+1} - y_{i,j}|.

The standard total-variation denoising problem is still of the form

\min_y \left[ E(x, y) + \lambda V(y) \right],

where E is the 2D L^2 norm. In contrast to the 1D case, solving this denoising problem is non-trivial. One algorithm that solves it is known as the primal–dual method.[4]

Due in part to much research in compressed sensing in the mid-2000s, there are many algorithms, such as the split-Bregman method, that solve variants of this problem.
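As in 1D, the isotropic functional can also be minimized with plain gradient descent once the square root is smoothed by a small ε. This is only a sketch with assumed parameter values, and it is far slower than the primal–dual or split-Bregman methods mentioned above:

```python
import numpy as np

def tv_denoise_2d(x, lam, eps=0.01, step=0.2, iters=3000):
    """Gradient descent on (1/N)*||y - x||^2 + lam * V_eps(y), where V_eps
    is the isotropic total variation with sqrt(... + eps**2) smoothing."""
    y = x.astype(float).copy()
    n = x.size
    for _ in range(iters):
        dx = np.pad(np.diff(y, axis=0), ((0, 1), (0, 0)))  # y[i+1,j]-y[i,j]
        dy = np.pad(np.diff(y, axis=1), ((0, 0), (0, 1)))  # y[i,j+1]-y[i,j]
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
        wx, wy = dx / mag, dy / mag
        gtv = -(wx + wy)              # pixel (i,j) enters its own sqrt term,
        gtv[1:, :] += wx[:-1, :]      # its upper neighbour's term,
        gtv[:, 1:] += wy[:, :-1]      # and its left neighbour's term
        y -= step * ((2.0 / n) * (y - x) + lam * gtv)
    return y

rng = np.random.default_rng(1)
clean = np.zeros((16, 16))
clean[4:12, 4:12] = 1.0                           # a bright square
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = tv_denoise_2d(noisy, lam=0.001)
```

With lam = 0 the data term alone is minimized and the input is returned unchanged; with the small positive lam above, noise in the flat regions is smoothed away while the square's edges remain sharp.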

Rudin–Osher–Fatemi PDE

Suppose that we are given a noisy image f and wish to compute a denoised image u over a 2D space. ROF showed that the minimization problem we are looking to solve is:[5]

\min_{u \in \mathrm{BV}(\Omega)} \|u\|_{\mathrm{TV}(\Omega)} + \frac{\lambda}{2} \int_\Omega (f - u)^2 \, dx,

where \mathrm{BV}(\Omega) is the set of functions with bounded variation over the domain \Omega, \mathrm{TV}(\Omega) is the total variation over the domain, and \lambda is a penalty term. When u is smooth, the total variation is equivalent to the integral of the gradient magnitude:

\|u\|_{\mathrm{TV}(\Omega)} = \int_\Omega \|\nabla u\| \, dx,

where \|\cdot\| is the Euclidean norm. Then the objective function of the minimization problem becomes

\min_u \int_\Omega \left[ \|\nabla u\| + \frac{\lambda}{2} (f - u)^2 \right] dx.

From this functional, the Euler–Lagrange equation for minimization, assuming no time-dependence, gives us the nonlinear elliptic partial differential equation

\nabla \cdot \left( \frac{\nabla u}{\|\nabla u\|} \right) + \lambda (f - u) = 0.

For some numerical algorithms, it is preferable to instead solve the time-dependent version of the ROF equation:

\frac{\partial u}{\partial t} = \nabla \cdot \left( \frac{\nabla u}{\|\nabla u\|} \right) + \lambda (f - u).
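A direct way to use the time-dependent form is explicit Euler stepping from u = f, regularizing \|\nabla u\| as sqrt(ux² + uy² + ε²) so the division is well defined. The step size, ε, λ, and boundary handling below are assumptions made for this sketch; practical solvers use more careful discretizations:

```python
import numpy as np

def rof_gradient_flow(f, lam, dt=0.01, eps=0.1, steps=500):
    """Explicit Euler stepping of u_t = div(grad u / |grad u|) + lam*(f - u),
    starting from u = f, with |grad u| regularized by eps."""
    u = f.astype(float).copy()
    for _ in range(steps):
        ux = np.diff(u, axis=0, append=u[-1:, :])   # forward differences,
        uy = np.diff(u, axis=1, append=u[:, -1:])   # replicated boundary
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag                 # normalized gradient
        # divergence via backward differences (zero flux at the boundary)
        div = px + py
        div[1:, :] -= px[:-1, :]
        div[:, 1:] -= py[:, :-1]
        u = u + dt * (div + lam * (f - u))
    return u

rng = np.random.default_rng(2)
clean = np.zeros((16, 16))
clean[4:12, 4:12] = 1.0
f = clean + 0.1 * rng.standard_normal(clean.shape)
u = rof_gradient_flow(f, lam=5.0)
```

The small time step keeps the explicit scheme stable for this choice of ε; as t grows, u approaches a steady state of the PDE, i.e. an approximate minimizer of the regularized ROF functional.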

Applications

The Rudin–Osher–Fatemi model was a pivotal component in producing the first image of a black hole.[6]


Notes and References

  1. Rudin, L. I.; Osher, S.; Fatemi, E. (1992). "Nonlinear total variation based noise removal algorithms". Physica D. 60 (1–4): 259–268. doi:10.1016/0167-2789(92)90242-f.
  2. Strong, D.; Chan, T. (2003). "Edge-preserving and scale-dependent properties of total variation regularization". Inverse Problems. 19 (6): S165–S187. doi:10.1088/0266-5611/19/6/059.
  3. Little, M. A.; Jones, Nick S. (2010). "Sparse Bayesian Step-Filtering for High-Throughput Analysis of Molecular Machine Dynamics". 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2010 Proceedings).
  4. Chambolle, A. (2004). "An algorithm for total variation minimization and applications". Journal of Mathematical Imaging and Vision. 20: 89–97. doi:10.1023/B:JMIV.0000011325.36760.1e.
  5. Getreuer, Pascal (2012). "Rudin–Osher–Fatemi Total Variation Denoising using Split Bregman". Web site.
  6. "Rudin–Osher–Fatemi Model Captures Infinity and Beyond" (2019-04-15). IPAM. Web site.