I'm wondering whether using nested functions in R can lead to an increase in memory usage. Imagine you have a big data matrix, X, that is already using most of the memory available to R, and you have some nested functions that pass this X matrix down. Something like:
n <- 1000; p <- 100000
X <- matrix(rnorm(n * p), nrow = n)  # relatively big data matrix

nested.function <- function(B) {  # simple function that does nothing
  return(B)
}

my.function <- function(A) {
  nested.function(B = A)
}

my.function(A = X)
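For scale, here is a quick back-of-the-envelope check of how much memory X alone takes (a sketch in base R, assuming the standard 8-byte storage for numeric values):

```r
# A numeric matrix stores 8 bytes per entry, so X should occupy
# about n * p * 8 bytes, plus a small fixed header.
n <- 1000; p <- 100000
print(n * p * 8 / 1024^2)  # size in MiB, roughly 763

# object.size() on the actual matrix agrees (commented out here to
# avoid allocating ~800 MB just for the check):
# print(object.size(matrix(0, n, p)), units = "MB")
```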
Obviously this example is silly, as the functions don't do anything. But for the sake of argument: how much memory is necessary to run my.function?
Is it at least 3 times the memory used to store X? Is X stored once in the global environment as X, then also inside my.function as A, and also inside nested.function as B?
At first I thought it would be stored only once, with the functions simply pointing to X, but if A or B is modified inside a function, it would then need to become its own object...
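One way to check this intuition directly is a sketch using base R's tracemem(), which prints a message whenever the traced object is duplicated (this requires an R build with memory profiling enabled, which is the default for the CRAN binaries):

```r
# Small sizes just for illustration; the copy behaviour is the same.
n <- 10; p <- 10
X <- matrix(rnorm(n * p), nrow = n)
tracemem(X)                            # start watching X for copies

nested.function <- function(B) B       # only reads its argument
my.function <- function(A) nested.function(B = A)
invisible(my.function(A = X))          # no copy message: A and B are
                                       # just references to X

modifying.function <- function(A) { A[1, 1] <- 0; A }
invisible(modifying.function(X))       # copy message: the write
                                       # triggers a duplicate of X
untracemem(X)
```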
PS: the question is also relevant (I think) when using just a single function: if an object X is duplicated as soon as you call foo(X), then R will be killed if X is already using almost all the available memory.
Thanks!