Foreground detection is one of the major tasks in the field of computer vision and image processing whose aim is to detect changes in image sequences. Background subtraction is any technique which allows an image's foreground to be extracted for further processing (object recognition etc.).
Many applications do not need to know everything about the evolution of movement in a video sequence, but only require the information of changes in the scene, because an image's regions of interest are objects (humans, cars, text etc.) in its foreground. After the stage of image preprocessing (which may include image denoising, post processing like morphology etc.) object localisation is required which may make use of this technique.
Foreground detection separates foreground from background based on these changes taking place in the foreground. It is a set of techniques that typically analyze video sequences recorded in real time with a stationary camera.
All detection techniques are based on modelling the background of the image, i.e. setting the background and detecting which changes occur. Defining the background can be very difficult when it contains shapes, shadows, and moving objects. In defining the background, it is assumed that stationary objects may vary in color and intensity over time.
Scenarios where these techniques apply tend to be very diverse. There can be highly variable sequences, such as images with very different lighting, interiors, exteriors, quality, and noise. In addition to processing in real time, systems need to be able to adapt to these changes.
A very good foreground detection system should be able to adapt to these highly variable conditions while still operating in real time.
Background subtraction is a widely used approach for detecting moving objects in videos from static cameras. The rationale in the approach is that of detecting the moving objects from the difference between the current frame and a reference frame, often called "background image", or "background model". Background subtraction is mostly done if the image in question is a part of a video stream. Background subtraction provides important cues for numerous applications in computer vision, for example surveillance tracking or human pose estimation.
Background subtraction is generally based on a static background hypothesis which is often not applicable in real environments. With indoor scenes, reflections or animated images on screens lead to background changes. Similarly, due to wind, rain or illumination changes brought by weather, static backgrounds methods have difficulties with outdoor scenes.[1]
The temporal average filter is a method that was proposed by Velastin. This system estimates the background model from the median of all pixels over a number of previous images. The system uses a buffer with the pixel values of the last frames to update the median for each image.
To model the background, the system examines all images in a given time period called the training time. During this period, the system only accumulates images and computes the median, pixel by pixel, of all the frames gathered in that time.
After the training period, each pixel value of every new frame is compared with the previously computed background value. If the input pixel is within a threshold of the background model, the pixel is considered to match the background and its value is included in the buffer. Otherwise, if the value is outside this threshold, the pixel is classified as foreground and not included in the buffer.
This method cannot be considered very efficient because it lacks a rigorous statistical basis and requires a buffer, which incurs a high computational cost.
A robust background subtraction algorithm should be able to handle lighting changes, repetitive motions from clutter and long-term scene changes.[2] The following analyses make use of the function V(x,y,t) as a video sequence, where t is the time dimension and x and y are the pixel location variables, e.g. V(1,2,3) is the pixel intensity at pixel location (1,2) of the image at t = 3 in the video sequence.
A motion detection algorithm begins with the segmentation part, where foreground or moving objects are segmented from the background. The simplest way to implement this is to take an image as background and take the frames obtained at time t, denoted by I(t), to compare with the background image denoted by B. Here, using simple arithmetic calculations, we can segment out the objects by using the image subtraction technique of computer vision, meaning that for each pixel in I(t), we take the pixel value denoted by P[I(t)] and subtract it from the corresponding pixel at the same position on the background image, denoted as P[B].
In mathematical equation, it is written as:
P[F(t)] = P[I(t)] - P[B]
The background is assumed to be the frame at time t. This difference image would only show some intensity for the pixel locations which have changed in the two frames. Though we have seemingly removed the background, this approach will only work for cases where all foreground pixels are moving, and all background pixels are static. A threshold "Threshold" is put on this difference image to improve the subtraction (see Image thresholding):
|P[F(t)] - P[F(t+1)]| > Threshold
This means that the difference image's pixels' intensities are 'thresholded' or filtered on the basis of value of Threshold.[3] The accuracy of this approach is dependent on speed of movement in the scene. Faster movements may require higher thresholds.
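As a minimal sketch (not part of the original text), the subtraction-and-threshold rule above can be written in plain Python; the frame contents, the threshold of 25, and the function name are illustrative assumptions:

```python
def frame_difference(current, background, threshold=25):
    """Classify each pixel as foreground (True) when the absolute
    intensity difference to the background frame exceeds the threshold."""
    return [[abs(c - b) > threshold for c, b in zip(crow, brow)]
            for crow, brow in zip(current, background)]

background = [[10, 10, 10],
              [10, 10, 10]]
current    = [[12, 200, 10],
              [10, 10, 90]]

mask = frame_difference(current, background, threshold=25)
# mask marks only the two pixels whose intensity changed substantially
```

Real implementations operate on camera frames (e.g. NumPy arrays) rather than nested lists, but the per-pixel logic is the same.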
For calculating the image containing only the background, a series of preceding images are averaged. For calculating the background image at the instant t:
B(x,y,t) = (1/N) Σ_{i=1}^{N} V(x,y,t-i)
where N is the number of preceding images taken for averaging. This averaging refers to averaging corresponding pixels in the given images. N depends on the video speed (number of images per second in the video) and the amount of movement in the video.[4] After calculating the background B(x,y,t), we can then subtract it from the image V(x,y,t) at time t and threshold it. Thus the foreground is:
|V(x,y,t)-B(x,y,t)|>Th
where Th is a threshold value. Similarly, we can also use the median instead of the mean in the above calculation of B(x,y,t).
Usage of global and time-independent thresholds (same Th value for all pixels in the image) may limit the accuracy of the above two approaches.
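As a toy sketch of the averaging model above (the frame contents and the threshold Th = 20 are invented for illustration; real code would use NumPy arrays):

```python
from statistics import mean, median

def estimate_background(frames, use_median=False):
    """Estimate the background, pixel by pixel, as the mean (or the
    median) of the N preceding frames stored in `frames`."""
    agg = median if use_median else mean
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[agg(f[y][x] for f in frames) for x in range(cols)]
            for y in range(rows)]

def foreground_mask(frame, background, th=20):
    """Threshold |V(x,y,t) - B(x,y,t)| > Th, pixel by pixel."""
    return [[abs(frame[y][x] - background[y][x]) > th
             for x in range(len(frame[0]))] for y in range(len(frame))]

history = [[[10, 10]], [[12, 10]], [[14, 100]]]   # three 1x2 frames
bg = estimate_background(history)                 # per-pixel mean
mask = foreground_mask([[13, 120]], bg)           # only pixel 1 changed
```

Switching `use_median=True` gives the median variant mentioned in the text, which is more robust to the outlier intensity 100 in the second pixel's history.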
For this method, Wren et al.[5] propose fitting a Gaussian probabilistic density function (pdf) on the most recent n frames. To avoid fitting the pdf from scratch at each new frame time t, a running (on-line cumulative) average is computed. The pdf of every pixel is characterized by its mean μ_t and variance σ²_t. A possible initial condition (assuming that initially every pixel is background) is:

μ_0 = I_0
σ²_0 = ⟨some default value⟩

where I_t is the pixel's intensity at time t.
Note that the background may change over time (e.g. due to illumination changes or non-static background objects). To accommodate that change, at every frame t each pixel's mean and variance are updated as follows:

μ_t = ρ I_t + (1-ρ) μ_{t-1}
σ²_t = d² ρ + (1-ρ) σ²_{t-1}
d = |I_t - μ_t|

where ρ is the learning rate (a typical value is ρ = 0.01) and d is the distance between the pixel's current intensity and its mean.
We can now classify a pixel as background if its current intensity lies within some confidence interval of its distribution's mean:

|I_t - μ_t| / σ_t > k  →  foreground
|I_t - μ_t| / σ_t ≤ k  →  background

where the parameter k is a free threshold (a typical value is k = 2.5). A larger value of k allows for a more dynamic background, while a smaller value of k increases the probability of a transition from background to foreground due to more subtle changes.
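The running-average update and the classification rule can be sketched in plain Python (a minimal sketch: ρ = 0.01 and k = 2.5 are the typical values suggested in the text, while the sample pixel statistics are made up):

```python
RHO = 0.01   # learning rate rho (typical value from the text)
K = 2.5      # confidence-interval width k (typical value from the text)

def update(mu, var, intensity, rho=RHO):
    """Running update of a pixel's Gaussian: mean first, then variance
    via d = |I_t - mu_t| (the distance to the updated mean)."""
    mu_new = rho * intensity + (1 - rho) * mu
    d = abs(intensity - mu_new)
    var_new = d * d * rho + (1 - rho) * var
    return mu_new, var_new

def is_foreground(mu, var, intensity, k=K):
    """Foreground when the intensity lies outside k standard deviations."""
    return abs(intensity - mu) / (var ** 0.5) > k

mu, var = update(100.0, 20.0, 110)    # mean drifts slightly toward 110
fg = is_foreground(100.0, 20.0, 180)  # a large jump is flagged as foreground
```

Because ρ is small, a single bright frame barely moves the model; only intensities far outside the learned distribution are labelled foreground.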
In a variant of the method, a pixel's distribution is only updated if it is classified as background. This is to prevent newly introduced foreground objects from fading into the background. The update formula for the mean is changed accordingly:
μ_t = M μ_{t-1} + (1-M)(I_t ρ + (1-ρ) μ_{t-1})

where M = 1 when I_t is considered foreground and M = 0 otherwise, so that the mean stays frozen (μ_t = μ_{t-1}) for foreground pixels.
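This selective-update variant can be sketched as follows (the sample statistics are invented; ρ and k use the typical values from the text):

```python
def selective_update(mu, var, intensity, rho=0.01, k=2.5):
    """Update the mean only when the pixel is classified as background
    (M = 0); for a foreground pixel (M = 1) the mean stays frozen."""
    M = 1 if abs(intensity - mu) / (var ** 0.5) > k else 0
    mu_new = M * mu + (1 - M) * (rho * intensity + (1 - rho) * mu)
    return mu_new, M

mu_fg, m_fg = selective_update(100.0, 20.0, 180)  # foreground: mean frozen
mu_bg, m_bg = selective_update(100.0, 20.0, 102)  # background: mean adapts
```

The design choice here is deliberate: freezing the mean for foreground pixels prevents a newly introduced object from slowly being absorbed into the background model.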
The mixture of Gaussians method models each pixel as a mixture of Gaussians and uses an on-line approximation to update the model. In this technique, it is assumed that every pixel's intensity values in the video can be modeled using a Gaussian mixture model.[6] A simple heuristic determines which intensities most probably belong to the background. The pixels which do not match these are called foreground pixels. Foreground pixels are grouped using 2D connected component analysis.
At any time t, a particular pixel (x₀, y₀) has the history

X₁, …, X_t = { V(x₀, y₀, i) : 1 ≤ i ≤ t }

This history is modeled by a mixture of K Gaussian distributions:

P(X_t) = Σ_{i=1}^{K} ω_{i,t} N(X_t | μ_{i,t}, Σ_{i,t})

where

N(X_t | μ_{i,t}, Σ_{i,t}) = (1 / ((2π)^{D/2} |Σ_{i,t}|^{1/2})) exp(-(1/2)(X_t - μ_{i,t})ᵀ Σ_{i,t}^{-1} (X_t - μ_{i,t}))
First, each pixel is characterized by its intensity in RGB color space. Then the probability of observing the current pixel is given by the following formula in the multidimensional case:

P(X_t) = Σ_{i=1}^{K} ω_{i,t} η(X_t, μ_{i,t}, Σ_{i,t})

where K is the number of distributions, ω_{i,t} is the weight associated with the ith Gaussian at time t, and μ_{i,t}, Σ_{i,t} are the mean and covariance matrix of that Gaussian. η is the Gaussian probability density function:

η(X_t, μ_{i,t}, Σ_{i,t}) = (1 / ((2π)^{D/2} |Σ_{i,t}|^{1/2})) exp(-(1/2)(X_t - μ_{i,t})ᵀ Σ_{i,t}^{-1} (X_t - μ_{i,t}))
Once the parameter initialization is done, a first foreground detection can be made and the parameters are then updated. The first B Gaussian distributions whose cumulative weight exceeds the threshold T are retained as the background distribution:

B = argmin_b ( Σ_{i=1}^{b} ω_{i,t} > T )
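The text does not specify the ordering of components; implementations commonly rank them by weight (or weight/σ) before accumulating. A minimal sketch ranking by weight, with T = 0.7 as an assumed example value:

```python
def background_components(weights, T=0.7):
    """Pick the smallest set of component indices, in decreasing weight
    order, whose cumulative weight exceeds T; these model the background."""
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    chosen, cum = [], 0.0
    for i in order:
        chosen.append(i)
        cum += weights[i]
        if cum > T:
            break
    return chosen

bg_idx = background_components([0.5, 0.3, 0.2])  # the two heaviest components
```

All remaining components are treated as foreground distributions.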
The other distributions are considered to represent a foreground distribution. When a new frame arrives at time t+1, a match test is made for each pixel: a pixel matches a Gaussian distribution if its Mahalanobis distance is below a threshold,

((X_{t+1} - μ_{i,t})ᵀ Σ_{i,t}^{-1} (X_{t+1} - μ_{i,t}))^{0.5} < k σ_{i,t}

where k is a constant threshold equal to 2.5. Then, two cases can occur:
Case 1: A match is found with one of the K Gaussians. For the matched component, the update is done as follows:[7]

σ²_{i,t+1} = (1-ρ) σ²_{i,t} + ρ (X_{t+1} - μ_{i,t+1})(X_{t+1} - μ_{i,t+1})ᵀ
Power and Schoonees[3] used the same algorithm to segment the foreground of the image, with the weight update:

ω_{i,t+1} = (1-α) ω_{i,t} + α P(k | X_t, φ)

The essential approximation to P(k | X_t, φ) is given by M_{k,t}:

M_{k,t} = 1 if matched, 0 otherwise
Case 2: No match is found with any of the K Gaussians. In this case, the least probable distribution is replaced by a new one with parameters:

ω_{i,t+1} = low prior weight
μ_{i,t+1} = X_{t+1}
σ²_{i,t+1} = large initial variance
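The match test and the two update cases above can be sketched for a single grayscale pixel (D = 1). This is a simplified illustration, not the authors' implementation: the learning rates ALPHA and RHO, and the replacement weight 0.05 and variance 900.0 standing in for "low prior weight" and "large initial variance", are assumptions.

```python
from math import sqrt

# Illustrative constants (assumed, not prescribed by the text).
ALPHA, RHO, K_SIGMA = 0.02, 0.05, 2.5

def update_mixture(components, x):
    """One simplified Stauffer–Grimson style update for a grayscale pixel
    value x. Each component is a mutable list [weight, mean, variance]."""
    matched = None
    for c in components:
        if abs(x - c[1]) < K_SIGMA * sqrt(c[2]):   # match test (D = 1)
            matched = c
            break
    if matched is not None:
        # Case 1: adapt the matched component toward the new observation.
        matched[1] = (1 - RHO) * matched[1] + RHO * x
        d = x - matched[1]
        matched[2] = (1 - RHO) * matched[2] + RHO * d * d
    else:
        # Case 2: replace the least probable component with a new Gaussian
        # centred on x (low prior weight, large initial variance).
        weakest = min(components, key=lambda c: c[0])
        weakest[0], weakest[1], weakest[2] = 0.05, x, 900.0
    # Weight update with M = 1 for the matched component, 0 otherwise,
    # followed by renormalization.
    for c in components:
        m = 1.0 if c is matched else 0.0
        c[0] = (1 - ALPHA) * c[0] + ALPHA * m
    total = sum(c[0] for c in components)
    for c in components:
        c[0] /= total
    return matched is not None

pixel_model = [[0.7, 100.0, 25.0],   # [weight, mean, variance]
               [0.3, 200.0, 25.0]]
update_mixture(pixel_model, 101)     # close to component 0 -> Case 1
```

A full implementation would maintain one such mixture per pixel and classify a pixel as background when it matches one of the B background components.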
Once the parameter maintenance is done, foreground detection can be made, and so on. An on-line K-means approximation is used to update the Gaussians. Numerous improvements of this original method developed by Stauffer and Grimson have been proposed; a complete survey can be found in Bouwmans et al.[7] A standard method of adaptive backgrounding is averaging the images over time, creating a background approximation which is similar to the current static scene except where motion occurs.
Several surveys which concern categories or sub-categories of models can be found as follows:
For more details, please see [19]
Several comparison/evaluation papers can be found in the literature:
The Background Subtraction Website (T. Bouwmans, Univ. La Rochelle, France) contains a comprehensive list of the references in the field, and links to available datasets and software.
The BackgroundSubtractorCNT library implements a very fast and high quality algorithm written in C++ based on OpenCV. It is targeted at low spec hardware but works just as fast on modern Linux and Windows. (For more information: https://github.com/sagi-z/BackgroundSubtractorCNT).
The BGS Library (A. Sobral, Univ. La Rochelle, France) provides a C++ framework to perform background subtraction algorithms. The code works either on Windows or on Linux. Currently the library offers more than 30 BGS algorithms. (For more information: https://github.com/andrewssobral/bgslibrary)