what @Kayaman said :)
What are the time requirements? Do you need to execute all 230 successfully within X seconds? What about the web server: do you control its default timeouts? Do all requests need to result in a 200? What happens if a single request fails? Do you have to retry until it succeeds? Do you have to invalidate all the other requests if some percentage fails? What about backoffs?
If you can't do the requests serially, you're left with some sort of concurrent code. Concurrent code is harder to get right than synchronous code: there are many more code paths to reason about, plus synchronized memory access and similar concerns.
If you HAVE to do the requests in the context of a web request, it's generally a good idea to limit concurrency (thread pool) to a set amount.
If the 230 is hardcoded, then it is a set amount, but it may still be too large. If this is a publicly available endpoint, there is nothing stopping someone from launching 10,000 concurrent requests against your server, and if you can service all of them, that's 2,300,000 concurrent outbound requests against your 230 URLs! Because of this, every resource should have some sane bound. If you pull the URLs from a DB and arbitrary users can add URLs, that's unbounded, and not good.
One easy way to do this is to limit concurrency by using a threadpool.
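A minimal sketch of what that could look like in Java, using `ExecutorService` with a fixed pool size (the `fetch` method is a hypothetical stand-in for your real HTTP call, and the pool size of 10 and 30-second wait are arbitrary numbers you'd tune):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BoundedFetch {
    // Hypothetical stand-in for the real HTTP call.
    static String fetch(String url) {
        return "response for " + url;
    }

    // Fetch every URL, but never run more than 10 at a time.
    static List<String> fetchAll(List<String> urls) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String u : urls) {
                futures.add(pool.submit(() -> fetch(u)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                // Bound the wait as well, so a hung fetch can't hang the request.
                results.add(f.get(30, TimeUnit.SECONDS));
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

No matter how many URLs come in, at most 10 outbound requests run at once; the rest queue up inside the executor.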
The architecture for this could consist of a bounded thread pool and a queue. When each web request comes in, it would enqueue URLs, and the thread pool would process them. If you need return values, you could have a return-value queue. What I like about this is that the producer (the web request handler) and the consumers (the thread pool) are both written in a synchronous style, and concurrency is achieved by the runtime executing the fetchers on a thread pool.
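A rough sketch of that producer/consumer shape, using two `BlockingQueue`s (again, `fetch` is a hypothetical stand-in for the real HTTP call, and the worker count is arbitrary):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class UrlPipeline {
    private final BlockingQueue<String> jobs = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> results = new LinkedBlockingQueue<>();
    private final ExecutorService workers;

    UrlPipeline(int nWorkers) {
        workers = Executors.newFixedThreadPool(nWorkers);
        for (int i = 0; i < nWorkers; i++) {
            workers.submit(this::consume);
        }
    }

    // Producer side: the web request handler just enqueues and moves on.
    void enqueue(String url) {
        jobs.add(url);
    }

    // Consumer loop: plain synchronous code; the pool provides the concurrency.
    private void consume() {
        try {
            while (true) {
                String url = jobs.take();
                results.add(fetch(url));
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // shutdown requested
        }
    }

    // Hypothetical stand-in for the real HTTP call.
    static String fetch(String url) {
        return "fetched " + url;
    }

    String takeResult() throws InterruptedException {
        return results.take();
    }

    void shutdown() {
        workers.shutdownNow();
    }
}
```

Note that results come back in completion order, not submission order, so you'd tag each job with an id if the caller needs to match them up.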
Kayaman touched on a commonly used way to address this: taking long-running work out of the context of a web request. The architecture could look a lot like the internal thread pool and queue, but interprocess: the queue would be an external job/message queue, and the consumers would pull from it. The web request would fire off 230 messages and return to the client, and the consumers would asynchronously, continually pull from the queue and make the requests :)
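To make the "return before the work is done" part concrete, here's a sketch where an in-memory queue stands in for the external broker (in a real system the broker and the consumers would be separate processes, and `fetch` is again a hypothetical stand-in):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncJobs {
    // In-memory stand-in for an external job/message queue; in a real
    // system this lives in a separate broker process.
    static final BlockingQueue<String> BROKER = new LinkedBlockingQueue<>();

    // Web request handler: fire off the messages and return immediately.
    // The client gets a response before a single fetch has run.
    static String handleRequest(List<String> urls) {
        BROKER.addAll(urls);
        return "202 Accepted";
    }

    // Consumer "process" (simulated here as a daemon thread): pulls
    // jobs from the broker forever and records what it fetched.
    static Thread startConsumer(List<String> done) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    done.add("fetched " + BROKER.take());
                }
            } catch (InterruptedException e) {
                // shutting down
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    // Hypothetical stand-in for the real HTTP call's bookkeeping.
}
```

The handler never blocks on the fetches at all; if the client needs the results, you'd typically hand back a job id it can poll, or push the results to it later.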