I have two I/O-intensive tasks that involve very little computation: one fetches and parses a web page, and the other stores the data extracted during parsing in a database. This cycle repeats for as long as the crawl continues.
Is there a method for dynamically adding and removing the threads working on each task, so that performance stays optimal on whatever machine the system runs on? The method cannot rely on up-front benchmarking, because the crawler will be distributed to a number of machines I cannot access beforehand.
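To make the question concrete, here is a minimal sketch of the kind of pipeline I mean (plain Python threads and queues; all names are hypothetical and my real code differs, this just shows where the two thread counts live):

```python
import queue
import threading

page_queue = queue.Queue()    # URLs waiting to be fetched and parsed
record_queue = queue.Queue()  # parsed records waiting to be stored
results = []                  # stand-in for the database

def fetcher():
    # Stand-in for "fetch and parse a web page" (I/O-bound in reality).
    while True:
        url = page_queue.get()
        if url is None:          # poison pill: time to shut down
            break
        record_queue.put(f"parsed:{url}")

def storer():
    # Stand-in for "store parsed data in a database" (also I/O-bound).
    while True:
        rec = record_queue.get()
        if rec is None:
            break
        results.append(rec)      # list.append is thread-safe in CPython

# These two counts are what I would like to adjust at runtime,
# per machine, without benchmarking first.
N_FETCHERS, N_STORERS = 4, 2

fetchers = [threading.Thread(target=fetcher) for _ in range(N_FETCHERS)]
storers = [threading.Thread(target=storer) for _ in range(N_STORERS)]
for t in fetchers + storers:
    t.start()

for i in range(10):
    page_queue.put(f"http://example.com/{i}")

# Shut down each pool with one poison pill per thread.
for _ in fetchers:
    page_queue.put(None)
for t in fetchers:
    t.join()
for _ in storers:
    record_queue.put(None)
for t in storers:
    t.join()

print(len(results))  # prints 10: every page was parsed and stored
```

The queue sizes seem like an obvious signal (a growing `record_queue` suggests too few storers, an empty one too few fetchers), but I do not know whether tuning thread counts from queue lengths like this is a sound approach.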
Can you point me to some sources or further information?