
The S3 FAQ mentions that "Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES." However, I don't know how long it takes to reach eventual consistency, and I couldn't find an answer in the S3 documentation.

Situation:

We have a website that consists of 7 steps. When the user clicks Save in each step, we want to save a JSON document (containing the information for all 7 steps) to Amazon S3. Currently we plan to:

  1. Create a single S3 bucket to store all JSON documents.
  2. When the user saves step 1, we create a new item in S3.
  3. When the user saves steps 2-7, we overwrite the existing item.
  4. After the user saves a step and refreshes the page, they should be able to see the information they just saved, i.e. we want to make sure that we always read after write.

The full JSON document (all 7 steps completed) is around 20 KB. After the user clicks the Save button, we can freeze the page for some time so they cannot make other changes until the save is finished.
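For reference, this is roughly how we plan to do the save (the bucket name and key layout below are placeholders, not our real setup), using boto3:

```python
import json

def build_document(steps):
    """Merge per-step form data (a dict of step number -> fields)
    into the single JSON document we would store in S3."""
    return json.dumps({"steps": {str(n): fields for n, fields in steps.items()}})

def save_document(bucket, user_id, body):
    """Upload the JSON document. boto3 is imported lazily here so the
    pure helper above works even without the AWS SDK installed."""
    import boto3
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=bucket,                    # placeholder bucket name
        Key=f"documents/{user_id}.json",  # placeholder key layout
        Body=body.encode("utf-8"),
        ContentType="application/json",
    )
```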

Question:

  1. How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)
  2. Is there a function to calculate save/load time based on item size?
  3. Is the save/load time going to be different if I choose another S3 region? If so, which is the best region for Seattle?
EV3

2 Answers


I wanted to add to @error2007s's answer.

How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)

It's not only that you will not find the exact time anywhere - there's actually no such thing as an exact time. That's just what "eventual consistency" is all about: consistency will be achieved, eventually. You can't know when.

If somebody gave you an upper bound for how long a system would take to achieve consistency, then you wouldn't call it "eventually consistent" anymore. It would be "consistent within X amount of time".


The problem now becomes, "How do I deal with eventual consistency?" (instead of trying to "beat it")

To really find the answer to that question, you need to first understand what kind of consistency you truly need, and how exactly the eventual consistency of S3 could affect your workflow.

Based on your description, I understand that you would write a total of 7 times to S3, once for each step you have. For the first write, as you correctly quoted from the FAQ, you get strong consistency for any reads after that. For all the subsequent writes (which are really "replacing" the original object), you might observe eventual consistency - that is, if you try to read the overwritten object, you might get the most recent version, or you might get an older one. This is what "eventual consistency" on S3 refers to in this scenario.

A few alternatives for you to consider:

  • don't write to S3 on every single step; instead, keep the data for each step on the client side, and then write 1 single object to S3 after the 7th step. This way, there's only 1 write and no "overwrites", so no "eventual consistency". This might or might not be possible for your specific scenario; you need to evaluate that.

  • alternatively, write S3 objects with different names for each step. E.g., something like: after step 1, save that to bruno-preferences-step-1.json; then, after step 2, save the results to bruno-preferences-step-2.json; and so on, then save the final preferences file to bruno-preferences.json, or maybe even bruno-preferences-step-7.json, giving yourself the flexibility to add more steps in the future. Note that the idea here is to avoid overwrites, which could cause eventual consistency issues. Using this approach, you only write new objects; you never overwrite them.

  • finally, you might want to consider Amazon DynamoDB. It's a NoSQL database; you can securely connect to it directly from the browser or from your server. It provides you with replication, automatic scaling and load distribution (just like S3). And you also have the option to tell DynamoDB that you want to perform strongly consistent reads (the default is eventually consistent reads; you have to change a parameter to get strongly consistent reads). DynamoDB is typically used for "small" records: 20 KB is well within the range -- the maximum size of a record is 400 KB as of today. You might want to check this out: DynamoDB FAQs: What is the consistency model of Amazon DynamoDB?
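The second alternative's key naming can be sketched like this (the naming scheme is just an illustration, not anything prescribed by S3): each step saves under its own key, so every save is a PUT of a new object and never an overwrite:

```python
def step_key(user_id, step=None):
    """Distinct S3 key per saved step: each save is a PUT of a *new*
    object (read-after-write consistent), never an overwrite
    (eventually consistent)."""
    if step is None:
        return f"{user_id}-preferences.json"  # the final, merged document
    return f"{user_id}-preferences-step-{step}.json"

def latest_step_key(user_id, saved_steps):
    """When the user refreshes the page, read back the key of the
    highest-numbered step they have saved so far."""
    return step_key(user_id, max(saved_steps))
```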

Bruno Reis
  • Thanks for the alternative solutions. Do we know how long it takes to upload a 20kb file to S3 the first time? If that doesn't take much time, would it make sense to create a new file each time and delete the old one (i.e. no update to existing files)? – EV3 Jun 07 '16 at 16:36
  • @EV3 about the upload time, I would definitely recommend that you do a simple benchmark -- it would be strongly related to where your upload originates from. If it's a server running inside AWS uploading to S3, it would likely be blazing fast; if it's a browser uploading to S3, it could depend on a lot of other factors (how good/fast/stable is the client's internet connection? what's the latency between client and S3? etc). I would definitely investigate that further - based on my experience, it's likely that 20kB uploads to S3 will give you good enough performance! – Bruno Reis Jun 07 '16 at 17:34
  • Thanks for your advice about DynamoDB. My final design is to create a new S3 document when user saves a step, save this S3 location to dynamoDB (and keep a mapping of document identifier and document S3 location), remove the old S3 document to free up space. – EV3 Jun 10 '16 at 21:07
  • @EV3: that sounds like a very good design! That approach (S3 + "pointers" in DynamoDB) is a common scenario when you need stronger consistency than what S3 alone can give you, and you need to store "larger" blobs (a few kB). One final suggestion: since you are going to keep updating your mapping in DynamoDB, you need to make sure that there are no data races - for this, you can use the Consistent Reads on DynamoDB + conditional updates. Eg, (consistently) read mapping for EV3 and a current "version" number, create new S3 object, save back to DDB the new S3 key and version++ if version = old. – Bruno Reis Jun 10 '16 at 22:49
  • Is there a flag in S3 object metadata to figure out if the object reached consistent state after the last update? – Jay Kumar Jul 10 '18 at 11:11
  • @JayKumar - no, there's no such flag. In CS terms, there can't be such a flag - the existence of such a flag would imply that the system is strongly consistent, violating the hypothesis of eventual consistency. – Bruno Reis Jul 17 '18 at 17:47
  • Starting from Dec 2020 S3 apparently delivers "Strong Read-After-Write Consistency" I would try to run tests again. – Larytet Feb 07 '21 at 07:30
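The optimistic-locking pattern discussed in the comments above (consistent read + conditional update with a version number) could be sketched roughly like this; the table, key and attribute names are assumptions for illustration, not anything prescribed by DynamoDB:

```python
def conditional_pointer_update(table, user_id, new_s3_key, expected_version):
    """Build the arguments for a DynamoDB UpdateItem call that swaps
    the S3 pointer only if nobody else updated the mapping since we
    read it (optimistic locking on a numeric version attribute).
    The '#v' placeholder is needed because 'version' is a DynamoDB
    reserved word."""
    return dict(
        TableName=table,
        Key={"user_id": {"S": user_id}},
        UpdateExpression="SET s3_key = :k, #v = :new",
        ConditionExpression="#v = :old",
        ExpressionAttributeNames={"#v": "version"},
        ExpressionAttributeValues={
            ":k": {"S": new_s3_key},
            ":old": {"N": str(expected_version)},
            ":new": {"N": str(expected_version + 1)},
        },
    )

def apply_update(kwargs):
    """Run the update; returns False if we lost the race (the caller
    should re-read the mapping and retry). boto3 is imported lazily
    so the builder above is usable without the AWS SDK."""
    import boto3
    from botocore.exceptions import ClientError
    try:
        boto3.client("dynamodb").update_item(**kwargs)
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False
        raise
```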

How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)

You will not find an exact time anywhere; if you ask AWS they will only give you approximate timings. Your file is 20 KB, so in my experience with S3 the time will be more or less 60-90 seconds.

Is there a function to calculate save/load time based on item size?

No, there is no function with which you can calculate this.

Is the save/load time gonna be different if I choose another S3 region? If so which is the best region for Seattle?

For Seattle, US West (Oregon) will work with no problem.

You can also take a look at this experiment for comparison: https://github.com/andrewgaul/are-we-consistent-yet
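In the same spirit as that experiment, here is a rough, self-contained probe you could run yourself: overwrite a key, then poll reads until the new value comes back. The get/put callables are injected (e.g. thin wrappers around s3.get_object / s3.put_object), so this is only a sketch, not a benchmark tool:

```python
import time

def probe_consistency(get, put, key, payload, timeout=30.0, interval=0.5):
    """Overwrite `key` with `payload`, then poll until a read returns
    the new value. Returns seconds elapsed until the store looked
    consistent, or None if it never did within `timeout`."""
    put(key, payload)
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if get(key) == payload:
            return time.monotonic() - start
        time.sleep(interval)
    return None
```

Against an eventually consistent store one run gives you a single sample, not a guarantee; repeat it many times to get a distribution.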

Piyush Patil