
I am examining different methods for outlier detection. I came across sklearn's implementation of Isolation Forest and Amazon SageMaker's implementation of RRCF (Robust Random Cut Forest). Both are ensemble methods based on decision trees, aiming to isolate every single point. The more isolation steps it takes to isolate a point, the more likely the point is to be an inlier, and vice versa.

However, even after reading the original papers for both algorithms, I fail to understand exactly how they differ. In what way do they work differently? Is one of them more efficient than the other?

EDIT: I am adding links to the research papers for more information, as well as some tutorials discussing the topic.

Isolation Forest:

  • Paper
  • Tutorial

Robust Random Cut Forest:

  • Paper
  • Tutorial

Houssam Metni

2 Answers


In part of my answer I'll assume you are referring to Sklearn's Isolation Forest. I believe these are the 4 main differences:

  1. Code availability: Isolation Forest has a popular open-source implementation in Scikit-Learn (sklearn.ensemble.IsolationForest), while both AWS implementations of Robust Random Cut Forest (RRCF), in Amazon Kinesis and Amazon SageMaker, are closed-source. There is an interesting third-party open-source RRCF implementation on GitHub, though: https://github.com/kLabUM/rrcf ; it is unclear how popular it is yet

  2. Training design: RRCF can work on streams, as highlighted in the paper and as exposed in the streaming analytics service Kinesis Data Analytics. On the other hand, the absence of a partial_fit method suggests that Sklearn's Isolation Forest is a batch-only algorithm that cannot readily work on data streams (see the first sketch after this list)

  3. Scalability: SageMaker RRCF is more scalable. Sklearn's Isolation Forest is single-machine code, which can nonetheless be parallelized across CPU cores with the n_jobs parameter. SageMaker RRCF, on the other hand, can run on one machine or on multiple machines. It also supports SageMaker Pipe mode (streaming data via Unix pipes), which lets it learn from much bigger datasets than what fits on local disk (see the second sketch after this list)

  4. Feature sampling at each recursive cut: RRCF gives more weight to dimensions with higher variance (according to the SageMaker documentation), while I think Isolation Forest samples features uniformly at random, which is one reason why RRCF is expected to perform better in high-dimensional spaces (see the figure in the RRCF paper)
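
To make the batch-vs-streaming contrast concrete, here is a minimal sketch. It assumes scikit-learn for Isolation Forest and the third-party kLabUM/rrcf package linked above; the data, tree counts, and window size are illustrative choices, not recommendations.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
import rrcf  # third-party package: https://github.com/kLabUM/rrcf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))   # illustrative 2-D data
X[-1] = [6.0, 6.0]               # plant one obvious outlier

# Batch: sklearn's Isolation Forest is fit once on the full dataset
# (there is no partial_fit); n_jobs parallelizes across CPU cores.
iso = IsolationForest(n_estimators=100, n_jobs=-1, random_state=0)
iso.fit(X)
if_scores = iso.score_samples(X)  # lower score = more anomalous

# Streaming: RRCF ingests one point at a time over a sliding window.
num_trees, tree_size = 40, 256
forest = [rrcf.RCTree() for _ in range(num_trees)]
codisp = np.zeros(len(X))
for i, point in enumerate(X):
    for tree in forest:
        if len(tree.leaves) > tree_size:
            tree.forget_point(i - tree_size)  # evict the oldest point
        tree.insert_point(point, index=i)
        codisp[i] += tree.codisp(i) / num_trees  # higher = more anomalous

print(if_scores[-1], codisp[-1])  # the planted outlier stands out in both
```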
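
For the scalability point, here is a hedged sketch of the SageMaker side using the RandomCutForest estimator from the SageMaker Python SDK. The role ARN and bucket are placeholders, and the instance count/type and hyperparameter values are illustrative assumptions, not recommendations.

```python
import numpy as np
import sagemaker
from sagemaker import RandomCutForest

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder ARN
bucket = session.default_bucket()

# instance_count > 1 distributes training over multiple machines,
# which sklearn's single-machine IsolationForest cannot do.
rcf = RandomCutForest(
    role=role,
    instance_count=2,
    instance_type="ml.m5.xlarge",
    data_location=f"s3://{bucket}/rcf/input",
    output_path=f"s3://{bucket}/rcf/output",
    num_samples_per_tree=512,
    num_trees=50,
)

X = np.random.default_rng(0).normal(size=(10_000, 3)).astype("float32")
rcf.fit(rcf.record_set(X))  # training data is read from S3, not local disk
```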

Olivier Cruchant
  • Thank you very much for your response and explanation! – Houssam Metni Jul 28 '20 at 13:13
  • @Olivier Cruchant Thanks for your great answer. May I kindly ask you to have a look at a related post [here](https://stackoverflow.com/questions/66643736/incorrect-results-of-isolationforest)? – Mario Mar 16 '21 at 11:07
  • @Olivier Cruchant Thanks for a great explanation. You mentioned partial_fit() in the context of RCF. Does RCF really support partial_fit()? If yes, how can we leverage this in Amazon SageMaker? – nivedan gowda Jan 18 '22 at 10:20
  • @nivedan gowda, thanks! No, partial_fit is a sklearn-specific abstraction. I don't think SageMaker RCF supports streaming training; I'm only aware of Kinesis RCF for this – Olivier Cruchant Jan 18 '22 at 22:00

I believe they also differ in how they assign the anomaly score. IF's score is based on a point's depth, i.e., its distance from the root node. RRCF's score is based on how much a new point changes the tree structure (i.e., the shift in tree size caused by including the new point). This makes RRCF less sensitive to the sample size.
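
As a rough illustration of the two scoring notions, the sketch below contrasts sklearn's depth-based score_samples with the collusive displacement (codisp) computed by the third-party kLabUM/rrcf package; the single-tree setup and the synthetic data are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
import rrcf  # third-party package: https://github.com/kLabUM/rrcf

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(255, 2)), [[8.0, 8.0]]])  # last point is an outlier

# Depth-based: sklearn derives its score from a point's average path
# length (depth) across the trees; lower score = more anomalous.
iso = IsolationForest(random_state=0).fit(X)
print("IF score of outlier:", iso.score_samples(X)[-1])

# Displacement-based: codisp estimates the change in model complexity
# caused by the point; higher codisp = more anomalous.
tree = rrcf.RCTree()
for i, point in enumerate(X):
    tree.insert_point(point, index=i)
print("RRCF codisp of outlier:", tree.codisp(len(X) - 1))
```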

Ragheb
  • "The anomaly score assigned to a data point by the tree is defined as the expected change in complexity of the tree as a result adding that point to the tree; which, *in approximation*, is inversely proportional to the resulting depth of the point in the tree." (https://docs.aws.amazon.com/sagemaker/latest/dg/rcf_how-it-works.html) It seems that IF and RRCF ways to assign the anomaly score is approximately the same. – asmaier May 16 '23 at 14:16