Feature scaling is a method used to normalize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.
Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization. For example, many classifiers calculate the distance between two points by the Euclidean distance. If one of the features has a broad range of values, the distance will be governed by this particular feature. Therefore, the range of all features should be normalized so that each feature contributes approximately proportionately to the final distance.
Another reason why feature scaling is applied is that gradient descent converges much faster with feature scaling than without it.[1]
It is also important to apply feature scaling if regularization is used as part of the loss function (so that coefficients are penalized appropriately).
Empirically, feature scaling can improve the convergence speed of stochastic gradient descent. In support vector machines,[2] it can reduce the time to find support vectors. Feature scaling is also often used in applications involving distances and similarities between data points, such as clustering and similarity search. As an example, the K-means clustering algorithm is sensitive to feature scales.
Also known as min-max scaling or min-max normalization, rescaling is the simplest method and consists of scaling the range of each feature to [0, 1] or [−1, 1]. Selecting the target range depends on the nature of the data. The general formula for a min-max of [0, 1] is given as:[3]
x' = \frac{x - \min(x)}{\max(x) - \min(x)}

where x is an original value and x' is the normalized value.
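As a concrete illustration, here is a minimal NumPy sketch of min-max rescaling to [0, 1], applied column-wise; the array X and the helper name rescale_min_max are purely illustrative, not from any particular library.

```python
import numpy as np

def rescale_min_max(X):
    """Rescale each column (feature) of X to the range [0, 1]."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    # (x - min) / (max - min); a constant column would divide by zero,
    # so its denominator is replaced by 1 and the column maps to 0.
    denom = np.where(x_max > x_min, x_max - x_min, 1.0)
    return (X - x_min) / denom

# Example: heights in cm and incomes in dollars have very different ranges.
X = np.array([[170.0, 30000.0],
              [180.0, 60000.0],
              [160.0, 45000.0]])
print(rescale_min_max(X))
```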
To rescale a range between an arbitrary set of values [a, b], the formula becomes:
x' = a + \frac{(x - \min(x))(b - a)}{\max(x) - \min(x)}

where a, b are the minimum and maximum values of the target range.
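A sketch of the generalized form follows, under the same assumptions (NumPy, illustrative helper name); a and b are the target bounds.

```python
import numpy as np

def rescale_to_range(X, a=-1.0, b=1.0):
    """Rescale each column of X to the target range [a, b]."""
    X = np.asarray(X, dtype=float)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    denom = np.where(x_max > x_min, x_max - x_min, 1.0)
    # a + (x - min) * (b - a) / (max - min)
    return a + (X - x_min) * (b - a) / denom

print(rescale_to_range(np.array([[0.0], [5.0], [10.0]]), a=-1.0, b=1.0))
# -> [[-1.], [0.], [1.]]
```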
Mean normalization subtracts the mean of each feature and divides by its range:

x' = \frac{x - \bar{x}}{\max(x) - \min(x)}

where x is an original value, x' is the normalized value, and \bar{x} = average(x) is the mean of the feature vector.
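As a sketch (again with an illustrative helper name and NumPy assumed), mean normalization can be computed column-wise like so:

```python
import numpy as np

def mean_normalize(X):
    """Subtract each column's mean and divide by its range (max - min)."""
    X = np.asarray(X, dtype=float)
    x_mean = X.mean(axis=0)
    x_range = X.max(axis=0) - X.min(axis=0)
    denom = np.where(x_range > 0, x_range, 1.0)  # guard constant columns
    return (X - x_mean) / denom

X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 600.0]])
print(mean_normalize(X))  # each column now has zero mean and values within [-1, 1]
```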
See also: Standard score.
In machine learning, we handle various types of data, e.g. audio signals and pixel values for image data, and this data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero mean (by subtracting the mean in the numerator) and unit variance. This method is widely used for normalization in many machine learning algorithms (e.g., support vector machines, logistic regression, and artificial neural networks).[4][5] The general method of calculation is to determine the distribution mean and standard deviation for each feature, then subtract the mean from each feature, and finally divide the values of each feature (with the mean already subtracted) by its standard deviation.
x' = \frac{x - \bar{x}}{\sigma}

where x is the original feature vector, \bar{x} = average(x) is the mean of that feature vector, and \sigma is its standard deviation.
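A short sketch of the same computation, assuming NumPy; the helper name standardize is illustrative (scikit-learn's StandardScaler applies the same transformation).

```python
import numpy as np

def standardize(X):
    """Z-score standardization: zero mean, unit variance per column."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma > 0, sigma, 1.0)  # guard constant columns
    return (X - mu) / sigma

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 60.0]])
Z = standardize(X)
print(Z.mean(axis=0))  # ~[0, 0]
print(Z.std(axis=0))   # ~[1, 1]
```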
Robust scaling, also known as standardization using the median and interquartile range (IQR), is designed to be robust to outliers. It scales features using the median and IQR as reference points instead of the mean and standard deviation:

x' = \frac{x - Q_2(x)}{Q_3(x) - Q_1(x)}

where Q_1(x), Q_2(x), Q_3(x) are the first, second (median), and third quartiles of the feature.
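A minimal sketch using NumPy percentiles; the helper name robust_scale is illustrative (scikit-learn's RobustScaler implements the same idea).

```python
import numpy as np

def robust_scale(X):
    """Scale each column by its median and interquartile range (IQR)."""
    X = np.asarray(X, dtype=float)
    q1, q2, q3 = np.percentile(X, [25, 50, 75], axis=0)
    iqr = np.where(q3 > q1, q3 - q1, 1.0)  # guard zero IQR
    return (X - q2) / iqr

# The outlier 1000.0 barely affects the scaling of the other values,
# unlike with mean/standard-deviation standardization.
X = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])
print(robust_scale(X))
```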
Unit vector normalization regards each individual data point as a vector and divides each by its vector norm to obtain

x' = x / \|x\|

Any vector norm can be used, but the most common choices are the L1 norm and the L2 (Euclidean) norm. For example, if x = (v_1, v_2, v_3), then its Euclidean norm is \|x\|_2 = \sqrt{v_1^2 + v_2^2 + v_3^2}, and the normalized vector is x' = x / \|x\|_2.
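A brief sketch, assuming NumPy and an illustrative helper name, normalizing each row (data point) by its norm:

```python
import numpy as np

def normalize_rows(X, ord=2):
    """Divide each row (data point) of X by its vector norm."""
    X = np.asarray(X, dtype=float)
    norms = np.linalg.norm(X, ord=ord, axis=1, keepdims=True)
    norms = np.where(norms > 0, norms, 1.0)  # leave zero vectors unchanged
    return X / norms

x = np.array([[3.0, 4.0, 0.0]])
print(normalize_rows(x))          # L2 norm: [[0.6, 0.8, 0.0]]
print(normalize_rows(x, ord=1))   # L1 norm: [[~0.43, ~0.57, 0.0]]
```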