I have the following large data frame:
HURS70.IN1 = 20000 obs. of 3655 variables
When I try to read this into R it takes a LONG time and often crashes my system:
HURS70.IN1 = read.delim("hurs_wfdei_1971_1980_-180_30.txt",
                        header = TRUE, sep = " ", quote = "")
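I've seen data.table::fread mentioned as a faster reader for large delimited files, but I haven't tried it yet. A minimal, untested sketch of what I mean, assuming the data.table package is installed:

# fread is generally much quicker than read.delim on files this size;
# data.table = FALSE returns a plain data frame instead of a data.table
HURS70.IN1 = data.table::fread("hurs_wfdei_1971_1980_-180_30.txt",
                               sep = " ", header = TRUE, data.table = FALSE)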
I have thus opted to split the data frame into smaller chunks as follows:
HURS70.1 = HURS70.IN1[1:2000, 2925:3655]
HURS70.2 = HURS70.IN1[2001:4000, 2925:3655]
HURS70.3 = HURS70.IN1[4001:6000, 2925:3655]
HURS70.4 = HURS70.IN1[6001:8000, 2925:3655]
HURS70.5 = HURS70.IN1[8001:10000, 2925:3655]
HURS70.6 = HURS70.IN1[10001:12000, 2925:3655]
HURS70.7 = HURS70.IN1[12001:14000, 2925:3655]
HURS70.8 = HURS70.IN1[14001:16000, 2925:3655]
HURS70.9 = HURS70.IN1[16001:18000, 2925:3655]
HURS70.10 = HURS70.IN1[18001:20000, 2925:3655]
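One alternative I've considered is replacing those ten assignments with a loop that stores the chunks in a list. A rough, untested sketch of what I have in mind (the variable names are just placeholders; the chunk size of 2000 and the column range 2925:3655 are the same values as above):

chunk.size = 2000
n.chunks = nrow(HURS70.IN1) / chunk.size   # 10 chunks for 20000 rows
HURS70.chunks = lapply(seq_len(n.chunks), function(i) {
  rows = ((i - 1) * chunk.size + 1):(i * chunk.size)
  HURS70.IN1[rows, 2925:3655]              # same column range as before
})
# HURS70.chunks[[1]] would then correspond to HURS70.1, and so on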
Is there a more efficient way to do this? I realise that writing out ten repetitive lines like this is amateurish and inefficient, but I'm not sure how else to do it, and my worry is that a loop like the sketch above will make the system run really slowly again. Any suggestions?