Isolation forest explained
Isolation Forest is an algorithm for data anomaly detection initially developed by Fei Tony Liu in 2008.[1] Isolation Forest detects anomalies using binary trees. The algorithm has linear time complexity and a low memory requirement, which makes it well suited to high-volume data.[2][3] In essence, the algorithm relies on the characteristics of anomalies, i.e., being few and different, in order to detect them. No density estimation is performed in the algorithm. The algorithm differs from decision tree algorithms in that only the path-length measure (or an approximation of it) is used to generate the anomaly score; no leaf-node statistics on class distribution or target value are needed.
Isolation Forest is fast because it splits the data space randomly, using a randomly selected attribute and a randomly selected split point. The anomaly score is inversely related to the path length, because anomalies, being few and different, need fewer splits to be isolated.
History
The Isolation Forest (iForest) algorithm was initially proposed by Fei Tony Liu, Kai Ming Ting and Zhi-Hua Zhou in 2008. In 2010, an extension of the algorithm, SCiForest,[4] was developed to address clustered and axis-parallel anomalies. In 2012 the same authors demonstrated that iForest has linear time complexity, a small memory requirement, and is applicable to high-dimensional data.
Algorithm
The premise of the Isolation Forest algorithm is that anomalous data points are easier to separate from the rest of the sample. In order to isolate a data point, the algorithm recursively generates partitions on the sample by randomly selecting an attribute and then randomly selecting a split value between the minimum and maximum values allowed for that attribute. An example of random partitioning in a 2D dataset of normally distributed points is given in Fig. 2 for a non-anomalous point and in Fig. 3 for a point that is more likely to be an anomaly. It is apparent from the pictures how anomalies require fewer random partitions to be isolated, compared to normal points.
Recursive partitioning can be represented by a tree structure named Isolation Tree, while the number of partitions required to isolate a point can be interpreted as the length of the path, within the tree, to reach a terminating node starting from the root. For example, the path length of point $x_i$ in Fig. 2 is greater than the path length of point $x_j$ in Fig. 3.
Let $X = \{x_1, \ldots, x_n\}$ be a set of $d$-dimensional points and $X' \subset X$. An Isolation Tree (iTree) is defined as a data structure with the following properties:
- for each node $T$ in the tree, $T$ is either an external node with no child, or an internal node with one “test” and exactly two child nodes ($T_l$ and $T_r$)
- a test at node $T$ consists of an attribute $q$ and a split value $p$ such that the test $q < p$ determines the traversal of a data point to either $T_l$ or $T_r$.
In order to build an iTree, the algorithm recursively divides $X'$ by randomly selecting an attribute $q$ and a split value $p$, until either
- the node has only one instance, or
- all data at the node have the same values.
When the iTree is fully grown, each point in $X$ is isolated at one of the external nodes. Intuitively, the anomalous points are those (easier to isolate, hence) with the smaller path length in the tree, where the path length $h(x_i)$ of point $x_i \in X$ is defined as the number of edges $x_i$ traverses from the root node to reach an external node.
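The recursive construction and the path-length measure described above can be sketched in plain Python. This is a minimal illustration, not the authors' reference implementation; the names `build_itree` and `path_length`, the dictionary-based node layout, and the height limit of 8 are assumptions made here for clarity:

```python
import random

def build_itree(X, depth=0, max_depth=8):
    """Recursively partition a list of d-dimensional points (tuples).

    Stops when the node holds a single instance, all points are identical,
    or a height limit is reached (the iForest paper also caps tree height)."""
    if len(X) <= 1 or depth >= max_depth or all(x == X[0] for x in X):
        return {"size": len(X)}                      # external node
    q = random.randrange(len(X[0]))                  # random attribute
    lo, hi = min(x[q] for x in X), max(x[q] for x in X)
    if lo == hi:                                     # simplification: a constant
        return {"size": len(X)}                      # attribute cannot be split
    p = random.uniform(lo, hi)                       # random split value
    return {
        "q": q, "p": p,
        "left": build_itree([x for x in X if x[q] < p], depth + 1, max_depth),
        "right": build_itree([x for x in X if x[q] >= p], depth + 1, max_depth),
    }

def path_length(x, node, depth=0):
    """Count edges traversed from the root to the external node that holds x."""
    if "size" in node:
        return depth
    child = node["left"] if x[node["q"]] < node["p"] else node["right"]
    return path_length(x, child, depth + 1)
```

Averaged over many such trees, an obvious outlier reaches an external node in noticeably fewer steps than a point inside the main cluster.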
A probabilistic explanation of iTree is provided in the original iForest paper.
Properties of isolation forest
- Sub-sampling: Because iForest does not need to isolate every normal instance, it can often ignore the majority of the training sample. As a consequence, iForest works very well when the sampling size is kept small, a property that contrasts with the great majority of existing methods, for which a large sampling size is usually desirable.
- Swamping: When normal instances are too close to anomalies, the number of partitions required to separate the anomalies increases, a phenomenon known as swamping, which makes it more difficult for iForest to discriminate between anomalies and normal points. A main cause of swamping is the presence of too much data for the purpose of anomaly detection, so one possible solution is sub-sampling. Since iForest responds very well to sub-sampling in terms of performance, reducing the number of points in the sample is also a good way to lessen the effect of swamping.
- Masking: When the number of anomalies is high, it is possible that some of them aggregate in a dense and large cluster, making it more difficult to separate the single anomalies and, in turn, to detect such points as anomalous. Similarly to swamping, this phenomenon (known as “masking”) is more likely when the number of points in the sample is large, and can likewise be alleviated through sub-sampling.
- High Dimensional Data: One of the main limitations of standard, distance-based methods is their inefficiency in dealing with high dimensional datasets.[5] The main reason is that in a high dimensional space every point is equally sparse, so using a distance-based measure of separation is largely ineffective. High dimensionality also degrades the detection performance of iForest, but the performance can be vastly improved by adding a feature selection test, such as kurtosis, to reduce the dimensionality of the sample space.
- Normal Instances Only: iForest performs well even if the training set does not contain any anomalous point, because iForest describes data distributions in such a way that high values of the path length $h(x)$ correspond to the presence of data points. As a consequence, the presence of anomalies in the training data is largely irrelevant to iForest's detection performance.
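The kurtosis-based feature selection just mentioned can be sketched as follows. This is an illustration only; the helper names `kurtosis` and `top_k_by_kurtosis` are ours, and the idea is simply to keep the k attributes whose distributions are most heavy-tailed before growing the trees:

```python
def kurtosis(values):
    """Sample excess kurtosis: large positive values indicate heavy tails."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    if var == 0.0:
        return 0.0                     # constant attribute: nothing to isolate on
    m4 = sum((v - mean) ** 4 for v in values) / n
    return m4 / var ** 2 - 3.0

def top_k_by_kurtosis(X, k):
    """Rank the attributes of a dataset X (a list of tuples) by kurtosis and
    return the indices of the k most heavy-tailed ones, i.e. the attributes
    most likely to expose isolatable anomalies."""
    d = len(X[0])
    scores = [(kurtosis([x[j] for x in X]), j) for j in range(d)]
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```

Trees are then grown on the selected attributes only, shrinking the sample space without discarding the heavy-tailed directions where anomalies tend to live.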
Anomaly detection with isolation forest
Anomaly detection with Isolation Forest is a process composed of two main stages:
- in the first stage, a training dataset is used to build iTrees.
- in the second stage, each instance in the test set is passed through these iTrees, and a proper “anomaly score” is assigned to the instance.
Once all the instances in the test set have been assigned an anomaly score, it is possible to mark as “anomaly” any point whose score is greater than a predefined threshold, which depends on the domain the analysis is being applied to.
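As a concrete illustration of the two stages, here is a minimal sketch using scikit-learn's `IsolationForest` (assuming scikit-learn and NumPy are installed; the synthetic data and parameter choices are ours). Note that scikit-learn's `score_samples` returns the negated paper score, so we negate it back to obtain "higher = more anomalous":

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stage 1: build the iTrees from a training sample.
train = rng.normal(0.0, 1.0, size=(500, 2))
forest = IsolationForest(n_estimators=100, random_state=0).fit(train)

# Stage 2: pass each test instance through the trees and score it.
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 2)), [[8.0, 8.0]]])
scores = -forest.score_samples(test)  # paper orientation: higher = more anomalous
labels = forest.predict(test)         # -1 marks points beyond the fitted threshold
```

The far-away point (8, 8) receives the highest score and is flagged with label -1, playing the role of a point whose score exceeds the predefined threshold.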
Anomaly score
The algorithm for computing the anomaly score of a data point is based on the observation that the structure of iTrees is equivalent to that of Binary Search Trees (BST): a termination at an external node of the iTree corresponds to an unsuccessful search in the BST. As a consequence, the estimation of the average $h(x)$ for external-node terminations is the same as that of the unsuccessful searches in a BST, that is[6]

$$c(m)=\begin{cases}2H(m-1)-\dfrac{2(m-1)}{m} & \text{for } m>2\\ 1 & \text{for } m=2\\ 0 & \text{otherwise}\end{cases}$$

where $n$ is the testing data size, $m$ is the size of the sample set and $H$ is the harmonic number, which can be estimated by $H(i) \approx \ln(i) + \gamma$, where $\gamma \approx 0.5772156649$ is the Euler–Mascheroni constant.
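The normalizing constant $c(m)$ can be computed directly from this definition; a small sketch (the function names `harmonic` and `c_factor` are ours):

```python
import math

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def harmonic(i):
    """Estimate the i-th harmonic number as H(i) ~ ln(i) + gamma."""
    return math.log(i) + EULER_GAMMA

def c_factor(m):
    """Average path length c(m) of an unsuccessful BST search over m points."""
    if m > 2:
        return 2.0 * harmonic(m - 1) - 2.0 * (m - 1) / m
    if m == 2:
        return 1.0
    return 0.0
```

For a sub-sample of size m = 256, c(m) is roughly 10.2, so a path of about ten edges is what an average point should need before reaching an external node.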
The value of $c(m)$ above represents the average of $h(x)$ given $m$, so we can use it to normalize $h(x)$ and get an estimation of the anomaly score for a given instance $x$:

$$s(x,m)=2^{-\frac{E(h(x))}{c(m)}}$$

where $E(h(x))$ is the average value of $h(x)$ from a collection of iTrees. It is interesting to note that for any given instance $x$:
- if $s$ is close to $1$ then $x$ is very likely to be an anomaly
- if $s$ is smaller than $0.5$ then $x$ is likely to be a normal value
- if for a given sample all instances are assigned an anomaly score of around $0.5$, then it is safe to assume that the sample doesn't have any anomaly
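The score computation can be sketched in a few self-contained lines (the function names are ours; the normalizer uses the harmonic-number approximation with the Euler-Mascheroni constant):

```python
import math

def c(m):
    """Average unsuccessful-search path length in a BST built over m points."""
    if m > 2:
        return 2.0 * (math.log(m - 1) + 0.5772156649) - 2.0 * (m - 1) / m
    return 1.0 if m == 2 else 0.0

def anomaly_score(mean_path_length, m):
    """s(x, m) = 2 ** (-E(h(x)) / c(m)).

    Scores near 1 suggest an anomaly; scores well below 0.5 suggest a
    normal point; scores around 0.5 across the whole sample suggest
    that no distinct anomalies are present."""
    return 2.0 ** (-mean_path_length / c(m))
```

For example, with a sub-sample of 256 points, a very short average path of one edge yields a score above 0.9, while an average path equal to c(m) itself yields exactly 0.5.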
Open source implementations
The original implementation by Fei Tony Liu is available as Isolation Forest in R.
Notes and References
- Liu, Fei Tony. "First Isolation Forest implementation on Sourceforge".
- Liu, Fei Tony; Ting, Kai Ming; Zhou, Zhi-Hua (December 2008). "Isolation Forest". 2008 Eighth IEEE International Conference on Data Mining. pp. 413–422. doi:10.1109/ICDM.2008.17. ISBN 978-0-7695-3502-9. S2CID 6505449.
- Liu, Fei Tony; Ting, Kai Ming; Zhou, Zhi-Hua (2012). "Isolation-Based Anomaly Detection". ACM Transactions on Knowledge Discovery from Data. 6 (1): 3:1–3:39. doi:10.1145/2133360.2133363. S2CID 207193045.
- Liu, Fei Tony; Ting, Kai Ming; Zhou, Zhi-Hua (September 2010). "On Detecting Clustered Anomalies Using SCiForest". Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2010). Lecture Notes in Computer Science. Vol. 6322. pp. 274–290. doi:10.1007/978-3-642-15883-4_18. ISBN 978-3-642-15882-7.
- Talagala, Priyanga Dilini; Hyndman, Rob J.; Smith-Miles, Kate (12 August 2019). "Anomaly Detection in High Dimensional Data". arXiv:1908.04000 [stat.ML].
- Shaffer, Clifford A. (2011). Data Structures & Algorithm Analysis in Java (3rd Dover ed.). Mineola, NY: Dover Publications. ISBN 9780486485812. OCLC 721884651.
- Verbus, James (13 August 2019). "Detecting and preventing abuse on LinkedIn using isolation forests". LinkedIn Engineering Blog. Retrieved 2023-07-02.
- Hariri, Sahand; Kind, Matias Carrasco; Brunner, Robert J. (April 2021). "Extended Isolation Forest". IEEE Transactions on Knowledge and Data Engineering. 33 (4): 1479–1489. arXiv:1811.02141. doi:10.1109/TKDE.2019.2947676. S2CID 53236735.
- Cortes, David (2019). "Distance approximation using Isolation Forests". arXiv:1910.12362 [stat.ML].