
I have a JavaScript application that regularly saves new and updated data. However, I need it to work on slow connections as well.

Data is submitted in one single HTTP POST request. The response will return newly inserted ids for newly created records.

What I'm finding is that the submitted data is fully saved, but sometimes the response times out. The browser application therefore does not know the data has been submitted successfully and will try to save it again.

I know I can detect the timeout in the browser, but how can I make sure the data is saved correctly?

What are some good methods of handling this case?

I see from here https://dba.stackexchange.com/a/94309/2599 that I could include a pending state:

  • Get a transaction number from the server
  • Send the data; it gets saved as pending on the server
  • If a pending transaction already exists, do not overwrite the data, but send the same results back
  • If success is received, commit the pending transaction
  • If an error comes back, retry later
  • If it times out, retry later
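
The steps above can be sketched on the client side as follows. The `transport` object is hypothetical: each method stands for one HTTP call and either resolves with the server's response or rejects on error/timeout.

```javascript
// A client-side sketch of the pending-transaction flow. The `transport`
// wrapper and its method names are assumptions, not a real API.
async function saveWithPendingTransaction(transport, data,
                                          maxAttempts = 5, backoffMs = 1000) {
  // Step 1: get a transaction number from the server.
  const txId = await transport.beginTransaction();

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      // Step 2: send the data; the server stores it as "pending".
      // If a pending transaction with this id already exists, the server
      // must not overwrite it, just return the same result again.
      const result = await transport.savePending(txId, data);

      // Step 3: success received, so commit the pending transaction.
      await transport.commit(txId);
      return result; // e.g. the newly inserted ids
    } catch (err) {
      // Step 4: error or timeout, so retry later with a growing delay.
      if (attempt === maxAttempts) throw err;
      await new Promise(resolve => setTimeout(resolve, backoffMs * attempt));
    }
  }
}
```

Because a repeated `savePending` with the same transaction id returns the same result instead of inserting again, a retry after a timed-out response is safe.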

However, I'm looking for a simpler solution.

jdog
  • possible duplicate of [Timeout XMLHttpRequest](http://stackoverflow.com/questions/1523686/timeout-xmlhttprequest) – Madhurendra Sachan Aug 21 '15 at 00:29
  • I see how I can detect a timeout in the server, but the client still does not know if the data has been saved without error or if the request just times out. – jdog Aug 21 '15 at 00:32
  • The server can detect a duplicate save request coming from the same client and just return some sort of "already-saved" code to the client. The server can be fixed to lessen the chance that the data is saved, but the response is not sent back to the client (particularly if the response is relatively small). – jfriend00 Aug 21 '15 at 00:36
  • Thanks @jfriend00, waiting for any alternative answers, but I can see how that would work. – jdog Aug 21 '15 at 00:37
  • What you can do is increase the timeout if the request fails. It is a failure; handling a timeout as success is bad practice. Moreover, it all depends on the data you are sending: if the data is very small you can retry, else try saving the response to temporary storage and returning it on the next request — a more resource-consuming method. – Madhurendra Sachan Aug 21 '15 at 00:38
  • When the client gets no response from a request, before retrying, it can ask the server whether the previous save request got processed. The client can coin a unique save-request-id that goes with each save request, and if the server sees the same id on multiple requests, then the server knows that the client thinks this is still the same operation. – jfriend00 Aug 21 '15 at 00:38
  • What programming language you are using in server side? – Madhurendra Sachan Aug 21 '15 at 00:41
  • @Anonymous Symfony/ PHP – jdog Aug 21 '15 at 00:42
  • @jdog _"What I'm finding is that data submitted is fully saved, however sometimes the return result times out. The browser application therefore does not know the data has been submitted successfully and will try to save it again."_ Tried not calling ajax again until response returned ? – guest271314 Aug 21 '15 at 01:01
  • @guest271314 I know the response times out, so that won't help. The user base is not computer literate enough to handle this case. It needs to submit to server until we know it is saved. – jdog Aug 21 '15 at 01:06

3 Answers


Really, it seems you need to get to the bottom of why the client thinks the data was not saved, but it actually was. If the issue is purely one of timing, then perhaps a client timeout just needs to be lengthened so it doesn't give up too soon or the amount of data you're sending back in the response needs to be reduced so the response comes back quicker on a slow link.

But, if you can't get rid of the problem that way, there are a bunch of possibilities to program around the issue:

  1. The server can keep track of the last save request from each client (or a hash of such request) and if it sees a duplicate save request come in from the same client, then it can simply return something like "already-saved".

  2. The code flow in the server can be modified so that a small response is sent back to the client immediately after the database operation has committed (no delays for any other types of back-end operations), thus lessening the chance that the client would timeout after the data has been saved.

  3. The client can coin a unique ID for each save request and if the server sees the same saveID being used on multiple requests, then it can know that the client thinks it is just trying to save this data again.

  4. After any type of failure, before retrying, the client can query the server to see if the previous save attempt succeeded or failed.
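
Ideas 1 and 3 can be sketched together on the server side. The in-memory `Map` is a stand-in for a real database table, and `handleSave`/`insertRecords` are illustrative names, not a real framework API.

```javascript
// Server-side idempotency sketch: the client coins a unique saveId per
// logical save; a repeat of the same saveId returns the cached response
// instead of inserting the records again.
const processedSaves = new Map(); // stand-in for a persistent table

function handleSave(saveId, data, insertRecords) {
  // Duplicate request: the data was already saved, the earlier response
  // just never reached the client. Return the same response again.
  if (processedSaves.has(saveId)) {
    return { alreadySaved: true, ...processedSaves.get(saveId) };
  }
  // First time we see this saveId: do the real insert.
  const newIds = insertRecords(data); // e.g. a database INSERT returning ids
  const response = { newIds };
  processedSaves.set(saveId, response);
  return response;
}
```

The same pattern works in PHP with the saveId stored in a unique-keyed column, so the dedup check and the insert can happen atomically.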

jfriend00
  • Thanks again, I have implemented your suggestion number 1. The application is supposed to work in both poor reception areas and also offline. Therefore a longer timeout will not necessarily help. – jdog Aug 23 '15 at 00:14

You can keep a retry count as a simple global int. You can also retry automatically, but this isn't a good fit for an auto-save app. A third option is to use one of the auto-save plugins for jQuery.
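
A minimal sketch of that retry counter, with illustrative names (`sendRequest` is whatever performs the actual save):

```javascript
// A simple global retry counter: give up after maxRetries failed
// attempts instead of retrying forever.
let retryCount = 0;
const maxRetries = 3;

async function trySave(sendRequest, data) {
  try {
    const result = await sendRequest(data);
    retryCount = 0; // success: reset the counter
    return result;
  } catch (err) {
    retryCount++;
    if (retryCount >= maxRetries) {
      retryCount = 0;
      throw new Error('giving up after ' + maxRetries + ' attempts');
    }
    return trySave(sendRequest, data); // retry
  }
}
```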

Dmitry Sadakov

A few suggestions:

  • Increase the timeout; don't handle a timeout as success.
  • You can flush the output for each record as soon as it is ready, using ob_flush() and flush().
  • Since you are making requests at regular intervals, check connection_aborted() on each API call. If the client has disconnected, you can save the response to a temp file, and on the next request append the last response to the new one — but this method is more resource-consuming.
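
The first suggestion can be sketched on the client as a timeout that grows on each retry, so a slow connection gets more time instead of the timeout being treated as success. `sendRequest` is a placeholder for the actual save call.

```javascript
// Wrap a request in a timeout that doubles on every attempt.
function withTimeout(promiseFactory, ms) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('timeout')), ms);
    promiseFactory().then(
      value => { clearTimeout(timer); resolve(value); },
      err => { clearTimeout(timer); reject(err); }
    );
  });
}

async function saveWithGrowingTimeout(sendRequest, data,
                                      baseMs = 5000, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      // Timeout doubles each attempt: baseMs, 2*baseMs, 4*baseMs, ...
      return await withTimeout(() => sendRequest(data), baseMs * 2 ** i);
    } catch (err) {
      if (i === attempts - 1) throw err; // out of attempts
    }
  }
}
```

Note that a timed-out attempt may still have been saved server-side, so this should be combined with the duplicate-detection ideas above.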