Odd request, I know, but I'm working on a program as a learning exercise that takes a .txt file containing a bunch of URLs pointing to text files on the web. It then hashes each word in each text and lets the user search.
I'm building the program twice, once without concurrency and once with. I'm just about done with the hashing part of the non-concurrent version, and my timings show that the runtime scales fairly linearly with the number of URLs in the original file.
The slowest part of the process, though, is actually retrieving the files from the web. Currently I'm doing it like so:
URL url = new URL(revURL);
Scanner revScanner = new Scanner(url.openStream());
where revURL is a String passed to the method from main. Is there a faster way to retrieve those files, or is this about as quick as it gets without breaking into concurrency?
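In case it helps, here's a self-contained sketch of that fetch step. The method name fetch, the UTF-8 charset, and returning the text as a String are just for illustration (my real program feeds the words into the hash table instead); I've swapped Scanner for a BufferedReader, since reading the stream line by line in buffered chunks seemed like it might have less overhead than Scanner's token parsing, but I haven't confirmed that's where the time goes:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class Fetch {
    // Retrieve the text file at revURL and return its contents as one String.
    static String fetch(String revURL) throws IOException {
        URL url = new URL(revURL);
        StringBuilder sb = new StringBuilder();
        // BufferedReader pulls the stream in large chunks; try-with-resources
        // makes sure the connection's stream is closed afterwards.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Usage: java Fetch <url>
        System.out.println(fetch(args[0]).length() + " chars fetched");
    }
}
```

Even with buffered reading, though, my understanding is that most of the wall-clock time is the network round trip itself, which is why I'm asking whether anything short of concurrency can speed it up.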