I have 70 CSV files with the same columns that I want to process identically: import, clean, write the result, and remove all variables before moving on to the next file, because each file is about 0.5 GB.
How can I do this in an efficient loop, without reloading packages on every iteration?
library(tidyverse)
setwd("~/R/R-3.5.1/bin/i386")
df <- read.csv(file.choose(), header = TRUE, sep = ",")  # pick one file interactively
inds <- which(df$pc_no == "DELL")                        # rows where pc_no is "DELL"
# copy pc_no/cust_id from each DELL row into the row above it
df[inds - 1, c("event_rep", "loc_id")] <- df[inds, c("pc_no", "cust_id")]
df1 <- df[-inds, ]                                       # drop the DELL rows themselves
write.csv(df1, "df1.csv")
rm(list = ls())                                          # clear the workspace
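One way to make the steps above repeatable is to wrap them in a function that takes a file path, so nothing persists in the global environment between files. A minimal sketch, using the column names from the code above; the `clean_` output prefix and the guard for files with no "DELL" rows are my additions:

```r
clean_file <- function(path) {
  df <- read.csv(path, header = TRUE)
  inds <- which(df$pc_no == "DELL")
  if (length(inds) > 0) {
    # copy pc_no/cust_id from each DELL row into the row above it
    df[inds - 1, c("event_rep", "loc_id")] <- df[inds, c("pc_no", "cust_id")]
    df <- df[-inds, ]            # drop the DELL rows themselves
  }
  write.csv(df, file.path(dirname(path), paste0("clean_", basename(path))),
            row.names = FALSE)
  invisible(NULL)                # nothing kept in memory after the call
}
```

Because all objects live inside the function, there is no need for `rm(list=ls())`; they are freed when the function returns.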
To iterate over the files, I think I should use something like the code below, but I don't know exactly where it fits, i.e. how to combine it with the code above:
files <- list.files(pattern = "^events.*?\\.csv", full.names = TRUE, recursive = FALSE)
lapply(files, function(f) {
  df <- read.csv(f, header = TRUE)
  inds <- which(df$pc_no == "DELL")
  df[inds - 1, c("event_rep", "loc_id")] <- df[inds, c("pc_no", "cust_id")]
  write.csv(df[-inds, ], paste0("clean_", basename(f)), row.names = FALSE)
})
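For 0.5 GB files, `read_csv()`/`write_csv()` from readr (part of the tidyverse already loaded above) are typically much faster than `read.csv()`, and the `library()` call only needs to run once, before the loop. A sketch of the whole pipeline as one function; the `clean_` prefix, the `dir` parameter, and the empty-`inds` guard are my assumptions, not from the question:

```r
library(readr)   # loaded once, not per file

clean_all <- function(dir = ".") {
  files <- list.files(dir, pattern = "^events.*?\\.csv", full.names = TRUE)
  for (f in files) {
    df <- as.data.frame(read_csv(f))   # fast reader; plain data.frame for row assignment
    inds <- which(df$pc_no == "DELL")
    if (length(inds) > 0) {            # guard: df[-integer(0), ] would drop every row
      df[inds - 1, c("event_rep", "loc_id")] <- df[inds, c("pc_no", "cust_id")]
      df <- df[-inds, ]
    }
    write_csv(df, file.path(dir, paste0("clean_", basename(f))))
    rm(df); gc()                       # release the ~0.5 GB before the next file
  }
  invisible(files)
}
```

`rm(df); gc()` inside the loop frees each file's memory before the next read, which matters when the files are large relative to RAM.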