I ran into a problem running the following code:
library("outliers")

# Flag outliers by repeatedly applying the Grubbs test
grubbs.flag <- function(x) {
  outliers <- NULL
  test <- x
  grubbs.result <- grubbs.test(test)
  pv <- grubbs.result$p.value
  while (pv < 0.05) {
    # Parse the outlying value out of the test's alternative-hypothesis text
    outliers <- c(outliers, as.numeric(strsplit(grubbs.result$alternative, " ")[[1]][3]))
    test <- x[!x %in% outliers]
    grubbs.result <- grubbs.test(test)
    pv <- grubbs.result$p.value
  }
  return(data.frame(X = x, Outlier = (x %in% outliers)))
}

# Make a vector whose elements are non-terminating decimals as an example
a <- c(1, 5, 7, 9, 110)
b <- c(3, 3, 3, 3, 3)
x <- a/b
grubbs.flag(x)
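For reference, the strsplit(...) line depends on the wording of the test's alternative-hypothesis string; a minimal sketch of that parsing step on my example data (the exact wording of the string comes from the outliers package, so this is my reading of it):

```r
library(outliers)

x <- c(1, 5, 7, 9, 110) / 3
res <- grubbs.test(x)

# The alternative text reads along the lines of
# "highest value 36.6666666666667 is an outlier",
# so the third whitespace-separated token is the printed outlier value.
res$alternative
as.numeric(strsplit(res$alternative, " ")[[1]][3])
```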
The code originally comes from the question "How to repeat the Grubbs test and flag the outliers".
If the vector x consists of non-terminating decimals, the line test <- x[!x %in% outliers] can fail when an outlier exists: the outlier value parsed from the alternative text is not recognized as an element of x, so nothing is ever removed from test and the loop runs forever. The reason seems to be that the parsed outlier carries fewer significant digits than the corresponding element of x.

So I'm curious how R decides how many digits of a non-terminating decimal to keep when a value is printed and compared, and how to deal with this problem.
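To illustrate the mismatch I suspect: converting a double to text (as paste()/as.character() do) keeps at most about 15 significant digits, so the value recovered from the string is not bit-for-bit equal to the original double (the exact printed digits below are my assumption, not verified output):

```r
exact  <- 110/3                            # stored as a binary double
parsed <- as.numeric("36.6666666666667")   # what the printed string would round-trip to
parsed == exact                            # FALSE: the string dropped trailing digits
exact %in% parsed                          # FALSE for the same reason, so the loop never stops

# Comparing with a tolerance instead of exact equality does match them:
abs(exact - parsed) < 1e-8
```

If this is the cause, then presumably dropping the outlier by index, or with a tolerance test such as abs(x - outlier) < tol instead of %in%, would avoid the endless loop.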