I'm trying to rip the HTML page source of a website to extract an email address. When I run the ripper/dumper (or whatever you want to call it), it grabs the source but stops at line 160. If I manually go to the webpage, right-click, and click View Page Source, I get the whole thing and can parse the text; the full source is a little over 200 lines. The problem with manually visiting each page is that there are over 100k pages, so it's gonna take a while.
Here's the code I'm using to get the page source:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class PageSourceDumper {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://www.runelocus.com/forums/member.php?102786-wapetdxzdk&tab=aboutme#aboutme");
        URLConnection connection = url.openConnection();
        connection.setDoInput(true);
        BufferedReader input = new BufferedReader(
                new InputStreamReader(connection.getInputStream()));

        StringBuilder html = new StringBuilder();
        String line;
        while ((line = input.readLine()) != null) {
            html.append(line).append('\n'); // readLine() strips the line terminator, so re-add it
        }
        input.close();
        System.out.println(html);
    }
}
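In case it matters, here's a variant I could try that sends a browser-like User-Agent header and specifies a charset on the reader. I'm assuming the server might serve a shorter page to Java's default User-Agent; the header value, the UTF-8 guess, and the class name are just placeholders I made up, not anything confirmed for this site:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PageSourceFetcher {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://www.runelocus.com/forums/member.php?102786-wapetdxzdk&tab=aboutme#aboutme");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        // Pretend to be a browser, on the assumption (unconfirmed) that the
        // server may truncate or alter the page for Java's default User-Agent.
        connection.setRequestProperty("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64)");

        StringBuilder html = new StringBuilder();
        // try-with-resources closes the reader even if an exception is thrown
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                html.append(line).append('\n');
            }
        }
        // Print the length too, to compare against what View Page Source shows
        System.out.println("Read " + html.length() + " chars");
        System.out.println(html);
    }
}

Comparing the printed character count against the manually saved page source should at least tell me whether the response itself is shorter or whether I'm losing data while reading it.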