I have a bunch of .R files containing functions that call other functions. These functions are spread across multiple files, and my life is complicated by the fact that there are lots of leftover functions which are not required (i.e. not called by the main scripting file).
For example:
main_file.R:
source("sub_file1.R")
source("sub_file2.R")
# let's call a function which will call other functions...
main_function()

sub_file1.R:
main_function = function(){
  sub_function1() # here is a function which can be found in sub_file2.R
}

sub_file2.R:
sub_function1 = function(){
  sub_function2() # oh no, do these nested functions ever end?
}
sub_function2 = function(){
  "You've reached the end!"
}
sub_function3 = function(){
  "This is a useless subfunction"
}
sub_function4 = function(){
  "This is another useless subfunction"
}
My Question
How would you look at these files as a collective and return the minimum set of functions required? For example, in an uncomplicated case like this one, an appropriate result would be a summary file like:
summary_file.R:
main_function = function(){
  sub_function1() # here is a function which can be found in sub_file2.R
}
sub_function1 = function(){
  sub_function2() # oh no, do these nested functions ever end?
}
sub_function2 = function(){
  "You've reached the end!"
}
main_function()
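To make it concrete, here is a rough sketch of the kind of analysis I'm imagining: source everything into a scratch environment, then walk the call graph outwards from main_function using codetools::findGlobals. This is only my assumption about how it could work, and it would miss anything invoked dynamically (e.g. via do.call() on a string):

# sketch (hypothetical): find every function reachable from main_function
library(codetools)

env <- new.env()
for (f in c("sub_file1.R", "sub_file2.R")) sys.source(f, envir = env)

defined  <- ls(env)        # every function defined across the files
required <- character(0)   # functions reachable from main_function
queue    <- "main_function"

while (length(queue) > 0) {
  fn <- queue[[1]]
  queue <- queue[-1]
  if (fn %in% required || !(fn %in% defined)) next
  required <- c(required, fn)
  # names of the functions called inside fn's body
  callees <- findGlobals(get(fn, envir = env), merge = FALSE)$functions
  queue <- c(queue, intersect(callees, defined))
}

required # "main_function" "sub_function1" "sub_function2"

In this example, sub_function3 and sub_function4 would be dropped, since nothing reachable from main_function ever calls them. Is something along these lines sensible, or is there an existing tool that does this properly?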
Are there any good practices to stop this happening in the first place, without resorting to building a package? E.g. do people usually have just one main scripting file and one extra file containing all the necessary functions? And if a function is really long, does that necessitate its own .R file?