I've got a wrapper for BufferedReader that reads files one after another to create an uninterrupted stream across multiple files:
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.ArrayList;
import java.util.zip.GZIPInputStream;
/**
 * Reads in a whole bunch of files such that when one ends it moves on to
 * the next file.
 *
 * @author isaak
 *
 */
class LogFileStream implements FileStreamInterface {
private ArrayList<String> fileNames;
private BufferedReader br;
private boolean done = false;
/**
*
* @param files an array list of files to read from, order matters.
* @throws IOException
*/
public LogFileStream(ArrayList<String> files) throws IOException {
    // defensive copy, since this list is consumed as files are read
    fileNames = new ArrayList<String>(files);
    setFile();
}
/**
 * Advances this reader to the next file in the list.
 *
 * @throws IOException
 */
private void setFile() throws IOException {
    // close the previous reader (if any) before moving on; this also
    // releases the last file's reader once the list is exhausted
    if (br != null) {
        br.close();
    }
    if (fileNames.isEmpty()) {
        this.done = true;
        return;
    }
    // if the file is a .gz file, do a little extra work;
    // otherwise read it with a standard FileReader.
    // in either case, set the buffer size to 128kB instead of the default 8kB.
    String next = fileNames.remove(0);
    if (next.endsWith(".gz")) {
        InputStream fileStream = new FileInputStream(next);
        InputStream gzipStream = new GZIPInputStream(fileStream);
        // TODO this probably needs to be modified to work well on any
        // platform; UTF-8 is standard for debian/novastar though.
        Reader decoder = new InputStreamReader(gzipStream, "UTF-8");
        br = new BufferedReader(decoder, 131072);
    } else {
        FileReader fileReader = new FileReader(next);
        br = new BufferedReader(fileReader, 131072);
    }
}
/**
* returns true if there are more lines available to read.
* @return true if there are more lines available to read.
*/
public boolean hasMore() {
return !done;
}
/**
* Gets the next line from the correct file.
* @return the next line from the files, if there isn't one it returns null
* @throws IOException
*/
public String nextLine() throws IOException {
    if (done) {
        return null;
    }
    String line = br.readLine();
    if (line == null) {
        // the current file is exhausted; advance to the next one and retry
        setFile();
        return nextLine();
    }
    return line;
}
}
If I construct this object on a large list of files (300MB worth) and then call nextLine() over and over in a while loop, printing each result, performance continually degrades until there is no more RAM to use. This happens even when I'm reading files of ~500kB in a virtual machine that has 32MB of memory.
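For reference, the consuming loop is essentially the following (a minimal sketch; the Driver class and the file paths are placeholders, not my real program):

import java.io.IOException;
import java.util.ArrayList;

public class Driver {
    public static void main(String[] args) throws IOException {
        ArrayList<String> files = new ArrayList<String>();
        files.add("logs/part-000.csv.gz"); // hypothetical paths
        files.add("logs/part-001.csv");

        LogFileStream stream = new LogFileStream(files);
        // print each line; no references to earlier lines are kept
        while (stream.hasMore()) {
            String line = stream.nextLine();
            if (line != null) {
                System.out.println(line);
            }
        }
    }
}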
I want this code to be able to run on positively massive data sets (hundreds of gigabytes worth of files), and it is a component of a program that needs to run with 32MB of memory or less.
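To enforce that limit while testing, I cap the heap with the standard JVM flag (Driver is the hypothetical harness above):

java -Xmx32m Driver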
The files being read are mostly labeled CSV files, hence the use of gzip to compress them on disk. This reader needs to handle both gzip and uncompressed files.
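For exercising the gzip branch, a throwaway test file like this is enough (a sketch; the file name and contents are arbitrary):

import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.util.zip.GZIPOutputStream;

public class MakeTestFile {
    public static void main(String[] args) throws Exception {
        // write a small labeled-CSV .gz file for the reader to consume
        Writer w = new OutputStreamWriter(
                new GZIPOutputStream(new FileOutputStream("test.csv.gz")), "UTF-8");
        w.write("timestamp,level,message\n");
        for (int i = 0; i < 10000; i++) {
            w.write(i + ",INFO,sample log line\n");
        }
        w.close();
    }
}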
Correct me if I'm wrong, but once a file has been read through and its lines spat out, the data from that file, the objects related to that file, and everything else associated with it should be eligible for garbage collection?
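In case it's relevant, this is how I've been watching heap usage while the loop runs (a rough sketch using java.lang.Runtime; the method is mine, called every few thousand lines from the loop):

// called periodically from the read loop to watch heap growth
private static void reportHeap(long linesRead) {
    Runtime rt = Runtime.getRuntime();
    long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
    System.err.println(linesRead + " lines read, heap in use: " + usedKb + " kB");
}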