If you only have a few rules in a discrete-event simulation this is not critical, but if you have many rules that can interfere with each other, you may want to track which rules are used and where.
- Does anybody know how to get the code below as fast as the original function?
- Are there better options than eval(parse(...))?
Here is a simple example which shows that I lose a factor of 100 in speed. Assume you run a simulation and one of the many rules is: select the states with time less than 5:
> a <- rnorm(100, 50, 10)
> print(summary(microbenchmark::microbenchmark(a[a < 5], times = 1000L, unit = "us")))
    expr  min   lq     mean median   uq    max neval
a[a < 5] 0.76 1.14 1.266745  1.141 1.52 11.404  1000
> myfun <- function(a0) {
+   return(eval(parse(text = myrule)))
+ }
> myrule <- "a < a0" # The rule could be read from a file.
> print(summary(microbenchmark::microbenchmark(a[myfun(5)], times = 1000L, unit = "us")))
       expr    min      lq     mean  median      uq     max neval
a[myfun(5)] 137.61 140.271 145.6047 141.411 142.932 343.644  1000
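To narrow things down: my understanding is that most of the overhead comes from parse() being run on every call, not from eval() itself. Here is a minimal sketch of the kind of alternative I am asking about, assuming a rule only needs to be parsed once after it is read in (myrule_expr and myfun2 are just names I made up for this sketch):

# Parse the rule string a single time, e.g. right after reading it from the file.
myrule_expr <- parse(text = myrule)

myfun2 <- function(a0) {
  # Evaluate the pre-parsed expression; 'a0' is found in this call frame,
  # 'a' and everything else in the global environment.
  eval(myrule_expr)
}

a[myfun2(5)]  # should select the same elements as a[a < 5]

The remaining per-call cost would then essentially be a single eval() plus the function call itself.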
Note: I don't think that I need an extra rete package to do the bookkeeping efficiently. But if there are other opinions, let me know...
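For the bookkeeping itself, this is roughly the scale of machinery I have in mind instead of a full rules engine; the names rules, rule_exprs and apply_rule below are only placeholders for this sketch:

# Rules kept as named strings (e.g. read from a file) and parsed once.
rules <- list(
  time_below_threshold = "a < a0",
  time_above_mean      = "a > mean(a)"
)
rule_exprs <- lapply(rules, function(r) parse(text = r))

apply_rule <- function(name, a0 = NULL) {
  # The rule name is available here, so "which" rule is applied "where"
  # could simply be logged at this point.
  eval(rule_exprs[[name]])
}

a[apply_rule("time_below_threshold", a0 = 5)]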