
Before I go too much further, I would like to assure you that I have done my due diligence and searched the web for advice/answers. In particular, I looked at the following post:

Calling Different Webservices in parallel from Webapp

In the above post, you will see that user1669664 has to "make about 15 different webservice calls all from one method."

I have read the best answer, supplied by matt b. This answer basically entails writing a Callable for each different Webservice call.

The thing is...

I have a similar problem on a much bigger scale - I need to make about 230 webservice calls.

I would be grateful to hear advice/suggestions. I don't want to write 230 Callables...!

Thank you.

user2318704
  • You can write them within a loop, if they do not vary except the url. What you need to provide is an Array with all 230 URLs to call. – kism3t Apr 12 '17 at 09:48
  • How on earth does a single request need to make 230 webservice calls? – Kayaman Apr 12 '17 at 09:50
  • Hi kism3t...you are right, only the url will differ. Could you expand on your answer, please. At present my code is running through a loop, calling each webservice one-at-a-time. Problem with this approach is that it takes too long. – user2318704 Apr 12 '17 at 10:24
  • Hi Kayaman...All I can say is that I work for a large organisation with many locations. I need to send a message to each of the locations. Anything you can do to help would be great. Thank you. – user2318704 Apr 12 '17 at 10:26
  • 1
    It sounds like you've got bigger problems than you can fix. Rolling your own solution for something like that is mistake #1, having inexperienced people trying to improve that is mistake #2. You don't need a webservice, you'd probably do great with a simple pub-sub message queue. – Kayaman Apr 12 '17 at 10:28
  • Hi Kayaman...thanks for getting back to me. I hear what you are saying...and agree with it. Trouble is, my hands are tied - I can only work with what I have got! Appreciate your thoughts on the matter. – user2318704 Apr 12 '17 at 10:58

1 Answer


what @Kayaman said :)

What are the time requirements? Do you need to execute all 230 calls successfully within X seconds? What about the webserver: do you control the default timeouts? Do all requests need to result in a 200? What happens if a single request fails? Do you have to retry until it succeeds? Do you have to invalidate all the other requests if some percentage fails? What about backoffs?

If you can't do the requests serially, you're left with some sort of concurrent code. Concurrent code is harder to get right than synchronous code: there are many more code paths to reason about, plus synchronized access to shared memory and the like.

If you HAVE to do the requests in the context of a web request, it's generally a good idea to limit concurrency to a set amount (e.g. with a fixed-size thread pool).

If 230 is hardcoded, that is a set amount, but it still may be too large. If this is a publicly available endpoint, there is nothing stopping someone from launching 10,000 concurrent requests against your server, and if you can service all of those requests, that is 2,300,000 concurrent outbound requests against your 230 URLs! Because of this, all resources should have some sort of sane bound. If you pull the URLs from a db and an arbitrary user may add URLs, that's unbounded and not good.

One easy way to do this is to limit concurrency by using a threadpool.
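A minimal sketch of that idea. The URL list, the pool size of 20, and the `fetch` helper are all illustrative placeholders; swap `fetch` for whatever HTTP client you actually use:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedFetch {

    // Placeholder for the real webservice call (HttpURLConnection, Apache HttpClient, ...)
    static String fetch(String url) {
        return "response from " + url;
    }

    // Fan out over the URLs with at most poolSize calls in flight at once
    static List<String> fetchAll(List<String> urls, int poolSize) throws Exception {
        List<Callable<String>> tasks = new ArrayList<>();
        for (String url : urls) {            // one Callable per URL, built in a loop
            tasks.add(() -> fetch(url));     // -- no need to hand-write 230 classes
        }
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<String> results = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(tasks)) {
                results.add(f.get());        // invokeAll preserves task order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> urls = new ArrayList<>();
        for (int i = 0; i < 230; i++) {
            urls.add("http://location" + i + ".example.com/notify");
        }
        System.out.println(fetchAll(urls, 20).size() + " calls completed");
    }
}
```

The key point is that only `poolSize` threads ever exist, no matter how long the URL list grows.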

The architecture for this could consist of a bounded thread pool and a queue. When each web request comes in, it would enqueue the URLs and the thread pool would process them. If you need return values, you could have a return value queue. What I like about this is that the producer (the web request handler) and the consumers (the thread pool) are both written in a synchronous style, and concurrency is achieved by the runtime executing the fetchers on a thread pool.
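That shape, sketched with two `BlockingQueue`s. The worker count, the queue capacities, and the `fetch` stand-in are assumptions for illustration, not prescriptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueFetch {
    private static final String STOP = "__STOP__";  // poison pill to shut workers down

    // Placeholder for the real webservice call
    static String fetch(String url) {
        return "response from " + url;
    }

    // Producer side: enqueue every URL, then collect one result per URL
    static List<String> run(List<String> urls, int workers) throws InterruptedException {
        BlockingQueue<String> jobs = new ArrayBlockingQueue<>(256);    // bounded on purpose
        BlockingQueue<String> results = new ArrayBlockingQueue<>(256);

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                try {
                    // Consumer: a plain synchronous loop; concurrency comes from
                    // the runtime scheduling these threads, not from the code style
                    for (String url = jobs.take(); !url.equals(STOP); url = jobs.take()) {
                        results.put(fetch(url));
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        for (String url : urls) {
            jobs.put(url);                 // blocks if the job queue is full
        }
        List<String> out = new ArrayList<>();
        for (int i = 0; i < urls.size(); i++) {
            out.add(results.take());       // completion order, not submission order
        }
        for (int i = 0; i < workers; i++) {
            jobs.put(STOP);
        }
        return out;
    }
}
```

For very large batches you would want to drain `results` while still enqueuing, so neither bounded queue fills up and stalls the other side.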

Kayaman touched on a commonly used way to address this: taking long-running processes out of the context of a web request. This architecture could look a lot like the internal thread pool and queue, but would be interprocess. The queue would be an external job/message queue, and the consumers would pull from that. The web request would fire off 230 messages and return to the client, and the consumers would asynchronously keep pulling from the queue and making the requests :)

dm03514
  • Wow...thanks for such a detailed answer, dm03514! As mentioned in an earlier comment, I am very constrained with what I can do...sadly, I cannot use a queue. But I do like the idea of looking at the webserver and perhaps doing something with the default timeout. – user2318704 Apr 12 '17 at 15:38
  • You should be able to use a queue internally, in the context of your web request. http://stackoverflow.com/a/2332581/594589 – dm03514 Apr 12 '17 at 15:42