Anomaly detection using Isolation Forest

Last Updated : 02 Apr, 2024

Anomaly detection is vital across industries, revealing outliers in data that signal problems or unique insights. Isolation Forests offer a powerful solution, isolating anomalies from normal data. In this tutorial, we will explore the Isolation Forest algorithm’s implementation for anomaly detection using the Iris flower dataset, showcasing its effectiveness in identifying outliers amidst multidimensional data.

What is Anomaly Detection?

Anomalies, also known as outliers, are data points that deviate significantly from the expected behavior or norm within a dataset. They are crucial to identify because they can signal potential problems, fraudulent activities, or interesting discoveries. Anomaly detection plays a vital role in various fields, including data analysis, machine learning, and network security.

Types of Anomalies

There are essentially three types of anomalies: point anomalies, contextual anomalies, and collective anomalies.

  • Point Anomalies (Global Anomalies): These are the most basic type, representing individual data points that are statistically unusual compared to the rest of the data. For instance, a credit card transaction with an exceptionally high amount might be flagged as a point anomaly.
  • Contextual Anomalies (Conditional Anomalies): These anomalies depend on the specific context or environment surrounding them. They often occur in time-series data, where patterns can change over time. An example is a sudden spike in temperature during winter within weather data.
  • Collective Anomalies: These involve groups of related data points exhibiting abnormal behavior collectively, even if individually they might seem normal. They disrupt the overall data distribution. Identifying collective anomalies often requires complex pattern-based algorithms, and they are commonly found in dynamic environments like network traffic data.

Isolation Forests for Anomaly Detection

Isolation Forest is an unsupervised anomaly detection algorithm that is particularly effective for high-dimensional data. It operates on the principle that anomalies are rare and distinct, which makes them easier to isolate from the rest of the data. Unlike methods that profile normal data, Isolation Forests focus on isolating the anomalies themselves: because anomalous points deviate significantly from the bulk of the data, they can be separated with comparatively few partitions.

Isolation Forests excel at anomaly detection by leveraging a unique approach: isolating anomalies instead of profiling normal data points. The algorithm works as follows (a small illustrative sketch follows this list):

  • Building Isolation Trees: The algorithm starts by creating a set of isolation trees, typically hundreds or even thousands of them. These trees are similar to traditional decision trees, but with a key difference: they are not built to classify data points into specific categories. Instead, isolation trees aim to isolate individual data points by repeatedly splitting the data based on randomly chosen features and split values.
  • Splitting on Random Features: Isolation trees introduce randomness at every step: at each node of the tree, a random feature from the dataset is selected, and a random split value is then chosen within the range of that feature’s values. This randomness helps ensure that anomalies, which tend to be distinct from the majority of data points, are not hidden within specific branches of the tree.
  • Isolating Data Points: The data points are then directed down the branches of the isolation tree based on their feature values.
    • If a data point’s value for the chosen feature falls below the split value, it goes to the left branch. Otherwise, it goes to the right branch.
    • This process continues recursively until the data point reaches a leaf node, which simply represents the isolated data point.
  • Anomaly Score: The key concept behind Isolation Forests lies in the path length of a data point through an isolation tree.
    • Anomalies, by virtue of being different from the majority, tend to be easier to isolate. They require fewer random splits to reach a leaf node because they are likely to fall outside the typical range of values for the chosen features.
    • Conversely, normal data points, which share more similarities with each other, might require more splits on their path down the tree before they are isolated.
  • Anomaly Score Calculation: Each data point is evaluated through all the isolation trees in the forest.
    • For each tree, the path length (number of splits) required to isolate the data point is recorded.
    • An anomaly score is then calculated for each data point by averaging the path lengths across all the isolation trees in the forest.
  • Identifying Anomalies: Data points with shorter average path lengths are considered more likely to be anomalies. This is because they were easier to isolate, suggesting they deviate significantly from the bulk of the data. A threshold is set to define the anomaly score that separates normal data points from anomalies. This threshold can be determined based on domain knowledge, experimentation, or established statistical principles.
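
To make the path-length intuition concrete, below is a minimal, self-contained sketch of the idea on a one-dimensional toy sample. It is not the scikit-learn implementation; the data values and the isolation_path_length helper are illustrative assumptions. An obvious outlier is isolated after far fewer random splits, on average, than a typical point.

Python3
import numpy as np

rng = np.random.default_rng(0)

def isolation_path_length(x, data, depth=0, max_depth=10):
    # number of random splits needed before x ends up alone (or the depth limit is hit)
    if len(data) <= 1 or depth >= max_depth:
        return depth
    lo, hi = data.min(), data.max()
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)  # random split value within the feature's range
    subset = data[data < split] if x < split else data[data >= split]
    return isolation_path_length(x, subset, depth + 1, max_depth)

values = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 35.0])  # 35.0 is an obvious outlier
for point in (10.0, 35.0):
    # average over many random "trees", as the forest does
    avg = np.mean([isolation_path_length(point, values) for _ in range(500)])
    print(f"point {point}: average path length ~ {avg:.2f}")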

Key Takeaways:

  • Isolation Forests leverage randomness to isolate data points effectively.
  • Anomalies require fewer splits on average due to their distinct nature.
  • The average path length across all trees serves as an anomaly score.
  • Lower scores indicate a higher likelihood of being an anomaly.

Anomaly detection using Isolation Forest: Implementation

Let’s implement the Isolation Forest algorithm for anomaly detection using the Iris flower dataset from scikit-learn. In the context of the Iris dataset, outliers are measurements that do not fit the typical patterns of the three known Iris species (Iris Setosa, Iris Versicolor, and Iris Virginica). The implementation proceeds in the following steps:

Step 1: Import necessary libraries

Python3
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt

Step 2: Loading and Splitting the Dataset

Python3
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

Step 3: Fitting the model

This code creates an Isolation Forest estimator using the IsolationForest class and fits it to the training data. The contamination parameter specifies the expected proportion of anomalies in the data; here, it is set to 0.1 (10%).

Python3
# initialize and fit the model
clf = IsolationForest(contamination=0.1)
clf.fit(X_train)
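
Because no random seed is fixed above, the set of points flagged as anomalous can change from run to run, so your output may differ slightly from the one shown below. Passing random_state (the value 42 here is an arbitrary choice) makes the results reproducible:

Python3
# optional: fix the seed so the same points are flagged on every run
clf = IsolationForest(contamination=0.1, random_state=42)
clf.fit(X_train)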

Step 4: Predictions

The predict method returns labels indicating whether each data point is classified as normal (1) or anomalous (-1) by the model.

Python3
# predict the anomalies in the data
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
print(y_pred_train)
print(y_pred_test)

Output:

[ 1  1  1  1 -1  1 -1  1  1 -1  1  1  1  1 -1  1  1  1  1  1  1  1 -1  1
1 1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 -1 1 1 -1 1 1 1 1 1 1 1 1 1 -1 1 1 1 1 1 1 1 1 1
1 1 1 1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 -1 1 1]
[ 1 -1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
-1 1 1 1 1 1 1 -1 1 1 1 1 1 1 1 -1 1 1 1 -1 1]
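
The 1/-1 labels come from thresholding a continuous anomaly score. To inspect that score directly, scikit-learn’s IsolationForest exposes it via score_samples (lower values mean more anomalous) and decision_function (points with negative values are the ones predict labels as -1):

Python3
# continuous anomaly scores for the test set
scores = clf.score_samples(X_test)          # lower = more anomalous
decisions = clf.decision_function(X_test)   # negative = flagged as -1 by predict
print(scores[:5])
print(decisions[:5])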

Step 5: Visualization

Python3
def create_scatter_plots(X1, y1, title1, X2, y2, title2):
    fig, axes = plt.subplots(1, 2, figsize=(12, 6))

    # Scatter plot for the first set of data
    axes[0].scatter(X1[y1==1, 0], X1[y1==1, 1], color='green', label='Normal')
    axes[0].scatter(X1[y1==-1, 0], X1[y1==-1, 1], color='red', label='Anomaly')
    axes[0].set_title(title1)
    axes[0].legend()

    # Scatter plot for the second set of data
    axes[1].scatter(X2[y2==1, 0], X2[y2==1, 1], color='green', label='Normal')
    axes[1].scatter(X2[y2==-1, 0], X2[y2==-1, 1], color='red', label='Anomaly')
    axes[1].set_title(title2)
    axes[1].legend()

    plt.tight_layout()
    plt.show()

# scatter plots
create_scatter_plots(X_train, y_pred_train, 'Training Data', X_test, y_pred_test, 'Test Data')

Output:

Figure: Scatter plots of the training data (left) and test data (right), with normal points in green and detected anomalies in red.

The anomalies are distributed differently in the two sets: in the training data they tend to lie at the edges of the plot, whereas in the test data they are scattered more broadly. Note that the plots show only the first two features (sepal length and sepal width), so a point flagged because of its other feature values may not look unusual in this two-dimensional projection.

Advantages of Isolation Forests

  • Effective for Unlabeled Data: Isolation Forests do not require labeled data (normal vs. anomaly) for training, making them suitable for scenarios where labeled data is scarce.
  • Efficient for High-Dimensional Data: The algorithm scales well with high-dimensional data sets, which can be challenging for other anomaly detection methods.
  • Robust to Noise: Because the anomaly score is averaged over many randomly built trees, Isolation Forests are relatively insensitive to noise in the data, making them reliable for real-world datasets.

The Isolation Forest algorithm offers an efficient solution for identifying anomalies, especially in datasets with multiple dimensions. It stands out by isolating outliers rather than profiling normal cases, making it more adept at uncovering rare instances that differ from the usual pattern.


