I've been experimenting with some old code that needs refactoring in places, and I was testing whether uploading files asynchronously (server side) gives any improvement to IIS thread usage etc. I'm using jQuery file upload on the client side.
The original code
[HttpPost]
public ActionResult UploadDocument( HttpPostedFileBase uploadedFile ) {
    // Do any validation here

    // Read bytes from the HTTP input stream into fileData
    Byte[] fileData;
    using ( BinaryReader binaryReader = new BinaryReader( uploadedFile.InputStream ) ) {
        fileData = binaryReader.ReadBytes( uploadedFile.ContentLength );
    }

    // Create a new Postgres bytea file blob ** NOT async **
    _fileService.CreateFile( fileData );

    return Json(
        new {
            ReturnStatus = "SUCCESS" // Or whatever
        }
    );
}
The new code
[HttpPost]
public async Task<ActionResult> UploadDocumentAsync( HttpPostedFileBase uploadedFile ) {
    // Do any validation here

    // Read bytes from the HTTP input stream into fileData
    Byte[] fileData = new Byte[uploadedFile.ContentLength];
    await uploadedFile.InputStream.ReadAsync( fileData, 0, uploadedFile.ContentLength );

    // Create a new Postgres bytea file blob ** NOT async **
    _fileService.CreateFile( fileData );

    return Json(
        new {
            ReturnStatus = "SUCCESS" // Or whatever
        }
    );
}
The new method appears to work correctly, but my question is:
Is the following code the correct (best) way to do it, and are there any gotchas doing it this way? There is a lot of contradictory and out-of-date information out there. There also seems to be a lot of debate about whether there is any real improvement or point in doing this at all; yes, it gives threads back to IIS, but is it worth the overhead?
The code in question
// Read bytes from the HTTP input stream into fileData
Byte[] fileData = new Byte[uploadedFile.ContentLength];
await uploadedFile.InputStream.ReadAsync( fileData, 0, uploadedFile.ContentLength );
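For context, one gotcha I'm aware of is that a single Stream.ReadAsync call is not guaranteed to fill the buffer; it may return fewer bytes than requested. A more defensive version would loop until all ContentLength bytes are read. This is just a sketch (the helper name ReadAllBytesAsync is mine, not from any framework):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

// Hypothetical helper: reads exactly 'count' bytes from a stream,
// looping because ReadAsync may return fewer bytes per call.
private static async Task<byte[]> ReadAllBytesAsync( Stream stream, int count ) {
    byte[] buffer = new byte[count];
    int totalRead = 0;
    while ( totalRead < count ) {
        int bytesRead = await stream.ReadAsync( buffer, totalRead, count - totalRead );
        if ( bytesRead == 0 ) {
            // Stream ended before the expected byte count arrived
            throw new EndOfStreamException( "Stream ended before " + count + " bytes were read." );
        }
        totalRead += bytesRead;
    }
    return buffer;
}
```

In the action this would replace the single ReadAsync call, e.g. `Byte[] fileData = await ReadAllBytesAsync( uploadedFile.InputStream, uploadedFile.ContentLength );`. Whether a single call ever short-reads for an HTTP input stream in practice is part of what I'm asking.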