I have a Web API which allows the user to retrieve data in real time and in batches. The problem, however, is that one of my tables has over 30 million records, which causes a bottleneck for batch requests. I use paging in my Get method, which returns a default of 10 records per API request, but even those 10 records take a long time to retrieve because of the bulk pull of data.
Here is a sample of my Get method:
public async Task<IHttpActionResult> Get(int pageno = 1, int pagesize = 10)
{
    int skip = (pageno - 1) * pagesize;
    int total = await db.webapi_customer_charges.CountAsync();
    var cc = await db.webapi_customer_charges
        .OrderBy(c => c.hospital_number)
        .Skip(skip)
        .Take(pagesize)
        .ToListAsync();
    return Ok(new Paging<webapi_customer_charges>(cc, pageno, pagesize, total));
}
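To make the bottleneck concrete (the request URL and page number below are hypothetical, purely to illustrate the offset arithmetic), Skip/Take paging means a deep page forces the database to walk past every skipped row before returning anything — Entity Framework translates Skip/Take into an OFFSET-style query against SQL Server:

```csharp
using System;

// Hypothetical deep-page request: GET /api/charges?pageno=1000000&pagesize=10
// (route name is an assumption; only the arithmetic mirrors the Get method above)
int pageno = 1_000_000;
int pagesize = 10;

// Same computation as in the Get method: rows the database must scan
// and discard (via OFFSET ... FETCH) before returning the 10 requested ones.
int skip = (pageno - 1) * pagesize;
Console.WriteLine(skip); // 9999990
```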
Is there a workaround or best practice for retrieving such a huge amount of data? Thank you.