
I have the following Web API 2 method:

    public HttpResponseMessage ProcessData([FromBody]ProcessDataRequestModel model)
    {
        var response = new JsonResponse();

        if (model != null)
        {
            // check if there are old records to process
            var records = _utilityRepo.GetOldProcesses(model.ProcessUid);

            if (records.Count > 0)
            {
                // there is an active process

                // insert the new process
                _utilityRepo.InsertNewProcess(records[0].ProcessUid);

                response.message = "Process added to ProcessUid: " + records[0].ProcessUid.ToString();
            }
            else
            {
                // if this is a new process then apply the adjustment rules
                var settings = _utilityRepo.GetSettings(model.Uid);

                // create a new process
                var newUid = Guid.NewGuid();

                // if it's a new adjustment
                if (settings.AdjustmentUid == null)
                {
                    settings.AdjustmentUid = Guid.NewGuid();

                    // create new Adjustment information
                    _utilityRepo.CreateNewAdjustment(settings.AdjustmentUid.Value);
                }

                // if the process was created
                if (_utilityRepo.CreateNewProcess(newUid))
                {
                    // insert the new body
                    _utilityRepo.InsertNewBody(newUid, model.Body, true);
                }

                // start AWS lambda function timer
                _utilityRepo.AWSStartTimer();

                response.message = "Process created";
            }

            response.success = true;
            response.data = null;
        }

        return Request.CreateResponse(response);
    }

The above method can sometimes take 3-4 seconds to process (some DB calls and other calculations), and I don't want the user to wait until all the executions are done.

I would like the user to hit the Web API method and almost immediately get a success response, while the server finishes all the executions.

Any clue on how to implement Async / Await to achieve this?

VAAA
  • Do not do that. But if you must, [here](https://stackoverflow.com/a/18509424/4228458) is how. – CodingYoshi Mar 08 '18 at 02:51
  • What do you recommend me then CodingYoshi? Thanks – VAAA Mar 08 '18 at 02:52
  • Well I would just show a progress bar and ask the user to wait. 3 to 4 seconds is not long. But if you think it is long, perhaps look into optimizing it. What if it fails and you have told the user "it passed! All is well!"? – CodingYoshi Mar 08 '18 at 02:55
  • This is a call from a third-party application, so there is no UI actually. I was worried that if I keep the connection open between the third-party app and the API, and I get many more calls, then I will have a queue of calls waiting to be processed – VAAA Mar 08 '18 at 02:56
  • Then in that case use async/await so you do not hold any threads busy so they can serve other requests. But even with async/await the work will still take 3-4 seconds. The only diff is that the threadpool thread will not be waiting around for the db to return back. It will serve other requests in the meantime. Having said that, you have to be sure the work is actually really asynchronous. The above link I provided, read the answerer's blog because he has good info on what is really asynchronous and what is not. – CodingYoshi Mar 08 '18 at 02:59

1 Answer


If you don't need to return a meaningful response, it's a piece of cake: wrap your method body in a lambda that you pass to Task.Run (which returns a Task). No need to use await or async; you just don't await the Task, and the endpoint will return immediately.
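To make that fire-and-forget shape concrete, here is a minimal, self-contained console sketch; all names are hypothetical, and `Thread.Sleep` stands in for the 3-4 seconds of DB work inside the controller action:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class FireAndForgetDemo
{
    static int completed = 0;

    public static int Completed => Volatile.Read(ref completed);

    // Stands in for the controller action: kick off the slow work, return at once.
    public static string Endpoint()
    {
        Task.Run(() =>
        {
            Thread.Sleep(200); // stands in for the 3-4 seconds of DB work
            Interlocked.Exchange(ref completed, 1);
        });
        return "accepted"; // the caller gets this before the work finishes
    }

    public static void Main()
    {
        Console.WriteLine(Endpoint()); // accepted
        Console.WriteLine(Completed);  // 0 - background work still running
        Thread.Sleep(500);
        Console.WriteLine(Completed);  // 1 - background work done
    }
}
```

In a real ASP.NET controller the same shape applies: the action returns its 200 right away, and whatever the lambda does happens on a thread-pool thread with no one observing it.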

However if you need to return a response that depends on the outcome of the operation, you'll need some kind of reporting mechanism in place, SignalR for example.

Edit: Based on the comments to the original post, my recommendation would be to wrap the code in await Task.Run(()=>...), i.e., indeed await it before returning. That will allow the long-ish process to run on a different thread asynchronously, but the response will still await the outcome rather than leaving the user in the dark about whether it finished (since you have no control over the UI). You'd have to test it though to see if there's really any performance benefit from doing this. I'm skeptical it'll make much difference.
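As a minimal console sketch of that awaited shape (hypothetical names; `Thread.Sleep` stands in for the DB calls): the work is offloaded to a thread-pool thread, but the caller still only gets the result once it has actually finished.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class AwaitedOffloadDemo
{
    // Hypothetical stand-in for the controller action body.
    public static async Task<string> ProcessDataAsync()
    {
        // The work runs on a thread-pool thread, but because we await it the
        // response is only produced once the work has actually completed.
        return await Task.Run(() =>
        {
            Thread.Sleep(100); // stands in for the DB calls and calculations
            return "Process created";
        });
    }

    public static void Main()
    {
        Console.WriteLine(ProcessDataAsync().GetAwaiter().GetResult()); // Process created
    }
}
```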

2020-02-14 Edit: Hooray, my answer's votes are no longer in the negative! I figured having had the benefit of two more years of experience I would share some new observations on this topic.

There's no question that asynchronous background operations running in a web server is a complex topic. But as with most things, there's a naive way of doing it, a "good enough for 99% of cases" way of doing it, and a "people will die (or worse, get sued) if we do it wrong" way of doing it. Things need to be put in perspective.

My original answer may have been a little naive, but to be fair the OP was talking about an API that was only taking a few seconds to finish, and all he wanted to do was save the user from having to wait for it to return. I also noted that the user would not get any report of progress or completion if it is done this way. If it were me, I'd say the user should suck it up for that short of a time. Alternatively, there's nothing that says the client has to wait for the API response before returning control to the user.

But regardless, if you really want to get that 200 right away JUST to acknowledge that the task was initiated successfully, then I still maintain that a simple Task.Run(()=>...) without the await is probably fine in this case. Unless there are truly severe consequences to the user not knowing the API failed, on the off chance that the app pool was recycled or the server restarted during those exact 4 seconds between the API return and its true completion, the user will just be ignorant of the failure and will presumably find out next time they go into the application. Just make sure that your DB operations are transactional so you don't end up in a partial success situation.
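The "no partial success" property is the key thing to preserve. Here is a toy, in-memory sketch of what a real DB transaction gives you for free (all names hypothetical; real code would use an actual database transaction rather than a staged list):

```csharp
using System;
using System.Collections.Generic;

public class TransactionalDemo
{
    // In-memory stand-in for the database table.
    static readonly List<string> db = new List<string>();

    public static int DbRowCount => db.Count;

    // All writes become visible together, or not at all - the property a real
    // DB transaction provides.
    public static bool RunJob(bool failHalfway)
    {
        var staged = new List<string>();
        try
        {
            staged.Add("process row");
            if (failHalfway) throw new Exception("app pool recycled mid-job");
            staged.Add("body row");
            db.AddRange(staged); // "commit"
            return true;
        }
        catch
        {
            return false; // "rollback": staged rows are simply discarded
        }
    }

    public static void Main()
    {
        Console.WriteLine(RunJob(failHalfway: true));  // False
        Console.WriteLine(DbRowCount);                 // 0 - no half-written process
        Console.WriteLine(RunJob(failHalfway: false)); // True
        Console.WriteLine(DbRowCount);                 // 2
    }
}
```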

Then there's the "good enough for 99% of cases" way, which is what I do in my application. I have a "Job" system which is asynchronous, but not reentrant. When a job is initiated, we do a Task.Run and begin to execute it. The code in the task always holds onto a Job data structure whose ID is returned immediately by the API. The code in the task periodically updates the Job data with status, which is also saved to a database, and checks to see if the Job was cancelled by the user, in which case it wraps up immediately and the DB transaction is rolled back. The user cancels by calling another API which updates said Job object in the database to indicate it should be cancelled. A separate infinite loop periodically polls the job database server side and updates the in-memory Job objects used by the actual running code with any cancellation requests. Fundamentally it's just like any CancellationToken in .NET but it just works via a database and API calls. The front end can periodically poll the server for job status using the ID, or better yet, if they have WebSockets the server pushes job updates using SignalR.
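A stripped-down, in-memory sketch of that polling-based job pattern (hypothetical names; the real system keeps the status in a database row, cancels via a separate API call that updates that row, and pushes updates to the client over SignalR):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class JobDemo
{
    // In the real system this status lives in a database row; here it is in-memory.
    static readonly ConcurrentDictionary<Guid, string> jobs =
        new ConcurrentDictionary<Guid, string>();

    public static string Status(Guid id) => jobs[id];

    // What the separate "cancel" API endpoint would do to the DB row.
    public static void Cancel(Guid id) => jobs[id] = "Cancelled";

    // Returns the job ID immediately; the work continues in the background and
    // periodically polls for a cancellation request.
    public static Guid StartJob()
    {
        var id = Guid.NewGuid();
        jobs[id] = "Running";
        Task.Run(() =>
        {
            for (int step = 0; step < 50; step++)
            {
                if (jobs[id] == "Cancelled") return; // cooperative cancellation
                Thread.Sleep(10); // one unit of work
            }
            jobs[id] = "Done";
        });
        return id;
    }

    public static void Main()
    {
        var id = StartJob();
        Console.WriteLine(Status(id)); // Running
        Cancel(id);                    // what the cancel API call would trigger
        Thread.Sleep(200);
        Console.WriteLine(Status(id)); // Cancelled
    }
}
```

The front end polls `Status` (or receives pushes) using the ID that `StartJob` returned immediately.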

So, what happens if the app domain is lost during the job? Well, first off, every job runs in a single DB transaction, so if it doesn't complete the DB rolls back. Second, when the ASP.NET app restarts, one of the first things it does is check for any jobs that are still marked as running in the DB. These are the zombies that died upon app pool restart but the DB still thinks they're alive. So we mark them as KIA, and send the user an email indicating their job failed and needs to be rerun. Sometimes it causes inconvenience and a puzzled user from time to time, but it works fine 99% of the time. Theoretically, we could even automatically restart the job on server startup if we wanted to, but we feel it's better to make that a manual process for a number of case-specific reasons.
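A sketch of that startup sweep, with an in-memory dictionary standing in for the jobs table (hypothetical names; real code would query the database and also email the job's owner):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ZombieCleanupDemo
{
    // Stand-in for the jobs table as it looks right after an app-pool restart:
    // rows still marked Running belong to jobs that died with the old app domain.
    public static readonly Dictionary<Guid, string> JobStatus = new Dictionary<Guid, string>
    {
        { Guid.NewGuid(), "Done" },
        { Guid.NewGuid(), "Running" }, // zombie
        { Guid.NewGuid(), "Running" }, // zombie
    };

    // Run once at startup: mark zombies as failed so the user can be told to
    // rerun them. Returns how many were found.
    public static int MarkZombiesFailed()
    {
        var zombies = JobStatus.Where(kv => kv.Value == "Running")
                               .Select(kv => kv.Key)
                               .ToList();
        foreach (var id in zombies)
            JobStatus[id] = "Failed";
        return zombies.Count;
    }

    public static void Main()
    {
        Console.WriteLine(MarkZombiesFailed());                         // 2
        Console.WriteLine(JobStatus.Values.Count(s => s == "Running")); // 0
    }
}
```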

Finally, there's the "people will die (or worse, get sued) if we get it wrong" way. This is what some of the other comments are more directed to. This is where you have to break down all jobs into small atomic transactions that are tracked in a database at every step, and which can be picked up by any server (the same or maybe another server in a farm) at any time. If it's really top notch, multiple servers can even work on the same job concurrently, depending on what it is. It requires carefully coding every background operation with this in mind, constantly updating a database with your progress, dealing with concurrent changes to the database (because now the entire operation is no longer a single atomic transaction), etc. Needless to say, it's a LOT of effort. Yeah, it would be great if it worked this way. It would be great if every app did everything to this level of perfection. I also want a toilet made out of solid gold, but it's just not in the cards now, is it?

So my $0.02 is, again, let's have some perspective. Do the cost benefit analysis and unless you're doing something where lives or lots of money is at stake, aim for what works perfectly well 99%+ of the time and only causes minor inconvenience when it doesn't work perfectly.

Emperor Eto
  • Where is this "piece of cake" code you are speaking of? Care to add that to your answer? – CodingYoshi Mar 08 '18 at 02:52
  • I read the blog post you recommended. Leaving aside the melodrama (who writes an entire article telling people how to do something they think is always a bad idea?) using Task.Run is perfectly acceptable if you have a means of informing the user - such as SignalR - of the status. If the server does shut down or lose the appdomain then the "job" will remain incomplete and the user will know it because they'll never get the confirmation. – Emperor Eto Mar 08 '18 at 03:18
  • With SignalR the client has to subscribe and accept messages but the OP is dealing with a third party. He cannot tell them to use SignalR and subscribe to his messages. – CodingYoshi Mar 08 '18 at 04:30
  • @PeterMoore, in ASP.NET, what guarantee do you have that that code will run to the end? – Paulo Morgado Mar 12 '18 at 21:18
  • @PauloMorgado good question - you don't have any. Just like you have no guarantee that any API call will finish. The difference is what the user is told about whether/how it finishes. So if you implement asynchronous processes on the server - whether you use Task.Run or something more complex you need a way of communicating status to the user asynchronously and handling failures / incomplete operations. And needless to say, db activity should always be transactional so it can be rolled back if there is a failure/incomplete op. – Emperor Eto Mar 14 '18 at 17:46