I am evaluating data from a text file in a rather large algorithm.
If the text file contains more than a certain number of datapoints (the minimum I need is something like 1.3 million datapoints), it gives the following error:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.regex.Matcher.<init>(Unknown Source)
at java.util.regex.Pattern.matcher(Unknown Source)
at java.lang.String.replaceAll(Unknown Source)
at java.util.Scanner.processFloatToken(Unknown Source)
at java.util.Scanner.nextDouble(Unknown Source)
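For context, the reading part of my code looks roughly like this (simplified; the names here are placeholders, not my real variables, but I do read the values with Scanner.nextDouble and keep them all in memory):

import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class DataReader {
    public static void main(String[] args) throws FileNotFoundException {
        List<Double> values = new ArrayList<Double>();    // everything is kept in memory
        Scanner scanner = new Scanner(new File("data.txt"));
        while (scanner.hasNextDouble()) {
            values.add(scanner.nextDouble());             // this is where the stack trace points
        }
        scanner.close();
        // ... the rest of the algorithm then works on 'values'
    }
}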
I am running it in Eclipse with the following settings for the installed jre6 (standard VM):
-Xms20m -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:NewSize=10m
-XX:MaxNewSize=10m -XX:SurvivorRatio=6 -XX:TargetSurvivorRatio=80
-XX:+CMSClassUnloadingEnabled
Note that it works fine if I only run through part of the text file.
Now I've read a lot about this subject, and it seems that I must either have a memory leak somewhere or be storing too much data in arrays (which I think I do).
My question is: how can I work around this? Is it possible to change my settings so that I can still perform the computation, or do I really need more computational power?
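One idea I've had, though I'm not sure it actually avoids the problem, is to skip Scanner entirely and stream the file with BufferedReader, parsing the values with Double.parseDouble into a primitive double[] so I don't keep boxed Double objects around. A rough sketch of what I mean (file name, array size, and the one-value-per-line assumption are just placeholders):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class StreamingReader {
    public static void main(String[] args) throws IOException {
        double[] values = new double[1300000];   // rough upper bound on datapoints, placeholder
        int count = 0;
        BufferedReader reader = new BufferedReader(new FileReader("data.txt"));
        String line;
        while ((line = reader.readLine()) != null && count < values.length) {
            // assumes one value per line; my real file might need extra splitting
            values[count++] = Double.parseDouble(line.trim());
        }
        reader.close();
        // the algorithm would then run on values[0..count)
    }
}

Would something like this be the right direction, or does it not address the GC overhead issue at all?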