The general approach would be to separate the crawling and the downloading tasks into separate worker Threads, with a cap on the number of Threads, depending on your memory requirements (i.e. the maximum RAM you want to spend holding pages in flight).
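If you were rolling that yourself, a minimal sketch of the split might look like this (the `frontier` queue and the download body are placeholders I've made up, not from any library):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class CrawlerPool {
    public static void main(String[] args) throws InterruptedException {
        // Cap concurrent downloads: each in-flight page costs memory,
        // so size the pool to fit the RAM you're willing to spend.
        final int maxDownloadThreads = 20;
        ExecutorService downloaders = Executors.newFixedThreadPool(maxDownloadThreads);

        // The crawl side pushes discovered URLs here; downloaders drain it.
        BlockingQueue<String> frontier = new LinkedBlockingQueue<>();
        frontier.add("https://example.com/");

        String url;
        while ((url = frontier.poll(10, TimeUnit.SECONDS)) != null) {
            final String target = url;
            downloaders.submit(() -> {
                // Fetch `target` here; parse it and push any new links
                // back onto `frontier` so crawling and downloading overlap.
                System.out.println("Downloading " + target);
            });
        }
        downloaders.shutdown();
    }
}
```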
However, crawler4j already gives you this functionality. By splitting downloading and crawling into separate Threads, you try to maximize the utilization of your connection, pulling down as much data as both your connection and the servers providing the information can handle. The natural limitation is that, even if you spawn 1,000 Threads, if the servers are only giving you the content at 0.3 KB per second, that's still only 300 KB per second that you'll be downloading. But you just don't have any control over that aspect of it, I'm afraid.
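For reference, here is roughly what tuning the Thread count looks like in crawler4j, following its documented quickstart (exact class and method signatures vary a bit between versions, so treat this as a sketch):

```java
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
import edu.uci.ics.crawler4j.url.WebURL;

public class CrawlLauncher {

    // Minimal crawler: stays inside the seed domain.
    public static class MyCrawler extends WebCrawler {
        @Override
        public boolean shouldVisit(Page referringPage, WebURL url) {
            return url.getURL().startsWith("https://example.com/");
        }

        @Override
        public void visit(Page page) {
            System.out.println("Fetched: " + page.getWebURL().getURL());
        }
    }

    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp/crawl-root"); // intermediate crawl data

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        controller.addSeed("https://example.com/");

        // The second argument is the number of concurrent crawler Threads;
        // raising it increases parallel fetches until your bandwidth saturates.
        controller.start(MyCrawler.class, 50);
    }
}
```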
The other way to increase the speed is to run the crawler on a system with a fatter pipe to the internet, since your maximum download speed is, I'm guessing, the current limiting factor on how fast you can get data. For example, if you ran the crawl on an AWS instance (or any of the cloud application platforms), you would benefit from its extremely high-speed connection to the backbone, and shorten the time it takes to crawl a collection of websites by effectively expanding your bandwidth far beyond what you're going to get on a home or office connection (unless you work at an ISP, that is).
It's theoretically possible that, in a situation where your pipe is extremely large, the limitation becomes the maximum write speed of your disk for any data that you're saving to local (or network) disk storage. For instance, a 1 Gbps connection can deliver roughly 125 MB per second, which is near or beyond the sustained write speed of many spinning disks.