
In another question on Stack Overflow I got the hint that I could use thread pools for the producer-consumer pattern my crawlers create.

However, I just cannot figure out how to implement it.

In a producer-consumer thread on SO they just use the thread pool to manage the producers and consumers (which in my case would be the crawlers themselves; and this is not so different from my for loop), but this does not seem to be what the commenter on my question intended (as he could not see that I used a for loop). The workload there is still shared via a queue.

I also thought about passing a Website object to ExecutorService.submit() with this implementation (removing Runnable from Crawler):

import java.net.URL;

public class Website implements Runnable {
    private final URL url;

    public Website(URL url) {
        this.url = url;
    }

    @Override
    public void run() {
        // Each submitted task creates its own Crawler for a single URL
        Crawler crawler = new Crawler();
        crawler.crawl(url);
    }
}
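
The submission itself would then look roughly like this (the class name, seed list, and pool size are just placeholders for my real setup):

import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CrawlerMain {
    public static void main(String[] args) throws Exception {
        // Hypothetical seed list; in my code the seeds currently come from the main class
        URL[] seeds = { new URL("http://example.com") };

        // Pool size chosen arbitrarily for this sketch
        ExecutorService executor = Executors.newFixedThreadPool(10);
        for (URL url : seeds) {
            executor.submit(new Website(url)); // one task per URL
        }
        executor.shutdown();
    }
}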

But the problems are that

  1. I think too many crawlers would be generated this way
  2. Crawler() expects a queue of already visited websites

How can I properly implement the producer-consumer pattern in my crawler problem? I am getting totally confused about it all. I checked so many examples on the web, and they all seem to use it differently.

aufziehvogel
  • It seems like each crawler is both a producer and a consumer. The system must load one or more seed URLs to put into the queue, but is there anything else involved other than that? – David Harkness Jul 14 '12 at 18:59
  • Yes, there is other stuff going on after the website has been found and the source code has been analyzed, but I think that is not targeted by the producer-consumer queue for websites. Websites can only be found by crawlers. The **initial set** comes from the main class at the moment (manually adding a few URLs) and will come from a GUI or config file in the future. And yes, crawlers are **both producers and consumers**. – aufziehvogel Jul 14 '12 at 19:06
  • Are you using threads now? A thread pool isn't really necessary if you are going to manage a constant set of crawlers (e.g. always 10). Not to say that it wouldn't be useful. All that matters is that each crawler gets its own thread. Is that the case now? – David Harkness Jul 14 '12 at 19:08
  • Yes, that’s what I am doing now (start several threads in a for loop and let them run until they are all stopped for some reason). Just had the feeling (after the comment), that thread pools might be a much more elegant way to implement this. But if that’s not the case, then I do not have to change my implementation. – aufziehvogel Jul 14 '12 at 19:09
  • A thread pool would be the way to go eventually so you could tune the number of crawlers on-the-fly via a dashboard, but it won't help you yet. You must first figure out why your threads are blocking on `take` of a non-empty queue. That makes no sense. – David Harkness Jul 14 '12 at 19:14

1 Answer


I think I would need to see more code in order to understand fully.
However, what you can do in order to have producer-consumer is to make your Crawler class the consumer,
and make the code that uses the executor the producer.
The Crawler will take WebSite objects from a queue or some other synchronized data structure shared between the producer and the consumer.
What you should ask yourself when selecting the data structure are questions like the following:

A. Are there priorities among the sites to be crawled?
If so, consider using a PriorityBlockingQueue.

B. Is the crawling order important, but all priorities the same?
If so, consider using a LinkedBlockingQueue.

C. Can you categorize the links somehow?
If so, maybe you can have several shared data structures, with a map from categories to them.

I am sure you can come up with many ideas on how to build this shared data structure on your own; these were just my thoughts.
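
For example, case A versus case B could look like this (the WebSite stub and its getPriority() method are hypothetical, just to show where the comparator plugs in):

import java.util.Comparator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;

public class QueueChoice {

    // Minimal stand-in for the real WebSite class; getPriority() is hypothetical
    static class WebSite {
        private final int priority;
        WebSite(int priority) { this.priority = priority; }
        int getPriority() { return priority; }
    }

    public static void main(String[] args) {
        // Case A: sites have priorities -> PriorityBlockingQueue with a comparator
        BlockingQueue<WebSite> prioritized =
                new PriorityBlockingQueue<>(11, Comparator.comparingInt(WebSite::getPriority));

        // Case B: plain FIFO order, all sites equal -> LinkedBlockingQueue
        BlockingQueue<WebSite> fifo = new LinkedBlockingQueue<>();

        prioritized.offer(new WebSite(1));
        fifo.offer(new WebSite(0));
    }
}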


To conclude -
1. Have Crawler implement Runnable.
2. Have Crawler take a "job" (a WebSite object) from a shared data structure (e.g., a blocking queue).
3. Have the producer put a job into the shared data structure before using the executor.
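
Putting the three steps together, a minimal sketch could look like this (all names, the visited-set handling, and the fixed pool size of four are illustrative; your real Crawler would hold the actual fetching and parsing logic):

import java.net.URL;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class CrawlerPool {

    // The "job" object: just wraps the URL to crawl
    static class WebSite {
        final URL url;
        WebSite(URL url) { this.url = url; }
    }

    // Consumer (and producer: discovered links go back into the queue)
    static class Crawler implements Runnable {
        private final BlockingQueue<WebSite> queue;
        private final Set<URL> visited;

        Crawler(BlockingQueue<WebSite> queue, Set<URL> visited) {
            this.queue = queue;
            this.visited = visited;
        }

        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    WebSite site = queue.take();     // blocks until a job is available
                    if (visited.add(site.url)) {     // skip already visited sites
                        crawl(site);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // shut down cleanly
            }
        }

        private void crawl(WebSite site) {
            // Fetch and parse site.url here; calling queue.offer(new WebSite(link))
            // for every discovered link is what makes this crawler a producer too.
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<WebSite> queue = new LinkedBlockingQueue<>();
        Set<URL> visited = ConcurrentHashMap.newKeySet();

        // Producer side: put the seed jobs in before starting the consumers
        queue.put(new WebSite(new URL("http://example.com")));

        ExecutorService executor = Executors.newFixedThreadPool(4); // 4 crawlers, arbitrary
        for (int i = 0; i < 4; i++) {
            executor.submit(new Crawler(queue, visited));
        }
    }
}

Note that the crawlers block on take() whenever the queue is empty, so they keep running until the executor is shut down or the threads are interrupted.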

Yair Zaslavsky