I'm currently working on a program that creates a pie chart from the letter frequencies in a text file. My test file is relatively large, and although the program works great on smaller files, it is very slow on large ones. I want to cut down the runtime by finding a more efficient way to read through the text file and strip out special characters and numbers. This is the code I have right now for this portion:
    import java.io.BufferedReader;
    import java.io.FileReader;
    import javax.swing.JPanel;

    public class readFile extends JPanel {
        protected static String stringOfChar = "";

        public static String openFile() {
            String s = "";
            try {
                BufferedReader reader = new BufferedReader(new FileReader("xWords.txt"));
                while ((s = reader.readLine()) != null) {
                    // Replace everything except letters and spaces with a space
                    String newstr = s.replaceAll("[^a-zA-Z ]", " ");
                    stringOfChar += newstr;
                }
                reader.close();
                return stringOfChar;
            }
            catch (Exception e) {
                System.out.println("File not found.");
            }
            return stringOfChar;
        }
    }
The code reads through the text file line by line, replacing all special characters with a space. After this is done, I sort the string into a hashmap of characters and frequencies.
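For context, the counting step afterwards looks roughly like this (simplified sketch; the class and method names here are just for illustration, not my actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class FrequencyCounter {
    // Count how often each letter appears in the cleaned-up string,
    // skipping the spaces that the replacement step inserted.
    public static Map<Character, Integer> countFrequencies(String text) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : text.toCharArray()) {
            if (c != ' ') {
                freq.merge(c, 1, Integer::sum); // add 1, or start at 1 if absent
            }
        }
        return freq;
    }

    public static void main(String[] args) {
        System.out.println(countFrequencies("hello world"));
    }
}
```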
I know from testing that this portion of the code accounts for the bulk of the processing time, but I'm not sure how to do the character replacement more efficiently.