
I have a web API that lets users retrieve data both in real time and in batches. The problem is that one table has over 30 million records, which creates a bottleneck for batch requests. My GET method uses paging and returns a default of 10 records per request, but even those 10 records take a long time to retrieve because of the size of the underlying data set.

Here is a sample of my GET method:

    public async Task<IHttpActionResult> Get(int pageno = 1, int pagesize = 10)
    {
        int skip = (pageno - 1) * pagesize;

        // CountAsync avoids blocking the request thread on the count query
        int total = await db.webapi_customer_charges.CountAsync();

        var cc = await db.webapi_customer_charges
            .OrderBy(c => c.hospital_number)
            .Skip(skip)
            .Take(pagesize)
            .ToListAsync();

        return Ok(new Paging<webapi_customer_charges>(cc, pageno, pagesize, total));
    }

Is there a workaround or best practice for retrieving a huge amount of data? Thank you.

Qwerty

1 Answer


I'm not sure how you can do it with EF, since I assume it retrieves the whole set before you can skip or take, but I think it would be faster if you called a stored procedure on the SQL Server side and passed in the minimum and maximum row numbers. That way you would only pull the amount of data you need from the server on each call.

Here is a link to how you can call a stored procedure with EF. If you don't find a better solution, give it a try; it should be quite simple to write a SELECT, use ROW_NUMBER() in SQL, and filter based on your input parameters.
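A minimal sketch of what that could look like. The procedure name, parameter names, and the table/entity names are assumptions taken from the question's code, not a definitive implementation:

```sql
-- Hypothetical stored procedure: numbers rows with ROW_NUMBER() over the
-- same ordering column the question uses (hospital_number), then returns
-- only the requested window.
CREATE PROCEDURE dbo.GetCustomerChargesPage
    @MinRow INT,
    @MaxRow INT
AS
BEGIN
    SET NOCOUNT ON;
    WITH numbered AS (
        SELECT c.*,
               ROW_NUMBER() OVER (ORDER BY c.hospital_number) AS rn
        FROM dbo.webapi_customer_charges AS c
    )
    SELECT *
    FROM numbered
    WHERE rn BETWEEN @MinRow AND @MaxRow;
END
```

And the EF6 side, using `Database.SqlQuery<T>` to materialize the proc's rows back into the question's entity type (the extra `rn` column is ignored during materialization):

```csharp
// Convert the page number/size into an inclusive row-number window.
int minRow = (pageno - 1) * pagesize + 1;
int maxRow = pageno * pagesize;

var cc = await db.Database
    .SqlQuery<webapi_customer_charges>(
        "EXEC dbo.GetCustomerChargesPage @MinRow, @MaxRow",
        new SqlParameter("@MinRow", minRow),
        new SqlParameter("@MaxRow", maxRow))
    .ToListAsync();
```

An index on `hospital_number` matters here: without it, SQL Server still has to sort the full table before it can number the rows, so the proc alone won't fix the latency.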

Raul A.