
I want to send 99999999 GET requests and print the status codes fast. I wrote this struct to handle each request:

    #include <cpr/cpr.h>  // https://github.com/libcpr/cpr
    #include <iostream>
    #include <string>
    #include <thread>

    struct node {
        std::string api;
        bool done;
        node() = default;
        node(std::string const &a) : api(a), done(false) {}
        void work() {
            // NOTE: the detached thread captures `this`, so the node must
            // outlive the request; `done` is also written without synchronization.
            std::thread([this] {
                auto f = cpr::GetAsync(cpr::Url{api},
                                       cpr::Header{{"name", "value"}},   // placeholder header
                                       cpr::Cookies{{"name", "value"}}); // placeholder cookie
                std::cout << f.get().status_code << '\n';  // AsyncResponse is future-like
                done = true;
            }).detach();
        }
    };

Then I created an array `node n[215];`, and a loop builds the URLs used in the requests:

    // build the URL string
    for (int i = 0; i < 215; ++i) {
        // other code builds the api string here; then the node is constructed from the URL
        node x(api);
        n[i] = std::move(x);
    }

After that I call this function to start each request:

    void init(node *x) {
        for (unsigned i = 0; i < 215; ++i) {  // was 256, which reads past the end of the 215-element array
            x[i].work();
        }
    }



    // in main
    std::thread th(init, n);
    th.join();

When I'm done with the array `node n[215]`, I do the same thing again: store 215 new objects in `n` and call `work()` on each of them, repeating until 99999999 requests have been sent. But it takes a lot of time. How can I get through 99999999 requests fast? Any suggestions please, thanks.
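
In other words, the overall loop looks roughly like this sketch (`build_api` is a hypothetical stand-in for my omitted URL-building code, and the wait for each batch's `done` flags is implied but not shown):

    constexpr std::size_t kBatch = 215;
    constexpr std::size_t kTotal = 99999999;

    node n[kBatch];
    for (std::size_t sent = 0; sent < kTotal; sent += kBatch) {
        for (std::size_t i = 0; i < kBatch; ++i) {
            n[i] = node(build_api(sent + i));  // hypothetical URL builder
        }
        init(n);  // fires one detached thread per node in the batch
        // ...wait until every node's done flag is set before reusing n...
    }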

john
  • To send `99999999` of most things will usually take some time. You probably need to invest in lots of hardware. – drescherjm May 26 '22 at 15:15
  • [How many Requests per Minute are considered 'Heavy Load'? (Approximation)](https://stackoverflow.com/questions/1319965/how-many-requests-per-minute-are-considered-heavy-load-approximation) – crashmstr May 26 '22 at 15:15
  • If you scale up to a lot more instances of your server and have good load balancing... I don't think you want to pay for that. – crashmstr May 26 '22 at 15:16
  • @drescherjm It takes days for me. Any solutions? – john May 26 '22 at 15:16
  • Redesign your server so you don't need that many requests. – crashmstr May 26 '22 at 15:16
  • This probably will involve purchasing or renting lots of rack mounted servers and expensive network hardware to connect them. – drescherjm May 26 '22 at 15:16
  • Start with 10 requests. Then 100. Then 1000. Then 10,000. Pay attention to when the system becomes sluggish. At some point, you'll want a *front end processor* to delegate the requests to a farm of back-end servers to handle those requests. – Eljay May 26 '22 at 15:21
  • @john Hate to be honest, but from the comments, you should have been familiar with the issues involved and various ways to mitigate these issues (unless you were brought into the project you're working on with very little background in this topic). – PaulMcKenzie May 26 '22 at 15:30
  • Don't start more threads than you have cores. Make one thread per core and have it handle a bunch of requests (see the thread-pool sketch after this comment thread). Use async IO to handle multiple requests in a single thread. You probably don't even need to start any threads, since the CPU is usually not the bottleneck when sending requests; a single core can usually saturate your bandwidth. – Goswin von Brederlow May 26 '22 at 15:31
  • @PaulMcKenzie What are the problems involved and the different ways to mitigate these problems? – john May 26 '22 at 15:33
  • @john Read the comments above. – PaulMcKenzie May 26 '22 at 15:34
  • I think @Eljay has a very good plan for how to proceed. – drescherjm May 26 '22 at 15:38
  • @drescherjm I use my neighbor's hacked wifi and a computer with only 8GB RAM – john May 26 '22 at 15:38
  • Then I expect you will have to adjust your expectations. `99999999` is likely too large. – drescherjm May 26 '22 at 15:38
  • @drescherjm Any solutions? – john May 26 '22 at 15:40
  • Just do as @Eljay said. Try 10, then 100 ... Forget about 99999999 – drescherjm May 26 '22 at 15:42
  • When I was a kid we had 8K RAM and we were damn glad to have that much. – user4581301 May 26 '22 at 15:52
  • @user4581301 Where is the RAM now? – john May 26 '22 at 16:52
  • @Eljay What is the benefit of sending 10 requests, then 100, then 1000, then 10,000? – john May 26 '22 at 16:54
  • You can **measure** the performance and assess when it substantially falls off (see the timing sketch after this comment thread). That's the point where you'll want to use a system to federate the requests for processing across multiple machines. Google, for example, can handle millions of requests, and they federate those across tens of thousands of machines in various data centers around the world. Your use case will require a solution of similar scale. – Eljay May 26 '22 at 16:56
  • The RAM is likely in some landfill at this point. We weren't so careful with electronics disposal back in the '70s and '80s. – user4581301 May 26 '22 at 16:58
  • What you want is going to be hard and possibly expensive. If you can demonstrate that the system fails to meet the requirements at 100000 requests, you've learned a lot for, probably, a lot less expense. – user4581301 May 26 '22 at 17:05
  • @Eljay Amazing, but I need external servers – john May 26 '22 at 17:06
  • @user4581301 My code never fails, but it takes a long time – john May 26 '22 at 17:07
  • @user4581301 How can I solve this problem with 8K RAM? – john May 26 '22 at 17:08
  • Posting 100M requests with 8K of RAM will be quite a feat. – Eljay May 26 '22 at 17:09
  • The 8K RAM bit was an old man rant in response to your comment about using a PC with 8GB RAM. Similar to "Bah. I walked 80 miles to school every day. Through 50 feet of snow. Uphill. Both ways." I can't say I hacked my neighbour's router with that ancient computer, but the fact that no one had routers back then could be a contributing factor. Besides, we were all too busy trying to break into Joshua to play Global Thermonuclear War. – user4581301 May 26 '22 at 17:16
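
Following up on the thread-per-core suggestion above, here is a minimal sketch of that approach, assuming the cpr library from the question: a fixed pool of workers pulls request numbers from a shared counter, and each worker reuses one `cpr::Session` instead of detaching a thread per request. `build_api` and the `example.com` URL are placeholders for the question's omitted URL-building code.

    #include <cpr/cpr.h>
    #include <algorithm>
    #include <atomic>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <thread>
    #include <vector>

    int main() {
        const std::size_t total = 99999999;            // total requests wanted
        const unsigned workers =
            std::max(1u, std::thread::hardware_concurrency());
        std::atomic<std::size_t> next{0};              // shared work counter

        auto build_api = [](std::size_t i) {           // placeholder URL builder
            return "https://example.com/item/" + std::to_string(i);
        };

        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&] {
                cpr::Session session;                  // one reusable connection per worker
                for (std::size_t i = next.fetch_add(1); i < total;
                     i = next.fetch_add(1)) {
                    session.SetUrl(cpr::Url{build_api(i)});
                    auto r = session.Get();            // synchronous GET on this worker
                    std::cout << r.status_code << '\n';  // note: output may interleave
                }
            });
        }
        for (auto &t : pool) t.join();
    }

Reusing one session per worker keeps the connection to the host alive between requests, which tends to matter more than thread count here: as noted above, the network rather than the CPU is usually the bottleneck.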
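
And a minimal sketch of the incremental measurement Eljay describes: time batches of 10, 100, 1000, and 10,000 requests and watch where throughput stops scaling. The URL is again a placeholder.

    #include <cpr/cpr.h>
    #include <chrono>
    #include <cstddef>
    #include <iostream>

    // Issue n sequential GETs; swap in the real endpoint being tested.
    static void send_batch(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            cpr::Get(cpr::Url{"https://example.com/"});
        }
    }

    int main() {
        for (std::size_t n = 10; n <= 10000; n *= 10) {
            auto start = std::chrono::steady_clock::now();
            send_batch(n);
            std::chrono::duration<double> elapsed =
                std::chrono::steady_clock::now() - start;
            std::cout << n << " requests took " << elapsed.count() << " s ("
                      << n / elapsed.count() << " req/s)\n";
        }
    }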

0 Answers