
I am using a GCS Standard storage bucket. If I write 100 GB of data from multiple processes and delete it after fetching it once, how does the storage cost work? Let's say I have 5 processes that each write 100 GB and delete the data after using it once. At any given point in time, the maximum bucket size will be 250 GB, held for about 30 minutes in every 24 hours. How does the pricing work in this situation?

Ram

2 Answers

According to the pricing tables, it looks like you would just get charged for:

  • operations (e.g. storage.buckets.list), which would depend on how many operations your processes issue for that 100 GB of data
  • any inter-region replication (this does not apply if you have just a regional bucket; replication in dual/multi-region buckets costs $0.02/GB, so 100 GB would cost you $2)
  • egress costs (which vary depending on your use case and where the data egresses to)
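
As a very rough illustration, here is a sketch of the first two components applied to the scenario in the question (500 GB written in total). The per-operation rates and the assumed object size/count are assumptions, not values from the question; check the current pricing tables before relying on them:

```python
# Rough, illustrative sketch of the operations and replication costs for
# the scenario in the question: 5 processes, each writing 100 GB once,
# reading it once, then deleting it. The rates and the assumed object
# layout are assumptions -- check https://cloud.google.com/storage/pricing.

PROCESSES = 5
GB_PER_PROCESS = 100
TOTAL_GB = PROCESSES * GB_PER_PROCESS  # 500 GB written in total

# Assumed Standard storage operation rates, per 1,000 operations:
CLASS_A_PER_1000 = 0.005   # writes/uploads (Class A)
CLASS_B_PER_1000 = 0.0004  # reads/downloads (Class B)

# Hypothetical layout: each process uploads its 100 GB as 100 x 1 GB
# objects, reads each object once, then deletes it (deletes are free).
objects = PROCESSES * 100
operations_cost = (objects / 1000) * (CLASS_A_PER_1000 + CLASS_B_PER_1000)

# Inter-region replication applies only to dual/multi-region buckets.
REPLICATION_PER_GB = 0.02
replication_cost = TOTAL_GB * REPLICATION_PER_GB

print(f"operations:  ${operations_cost:.4f}")   # ~$0.0027
print(f"replication: ${replication_cost:.2f}")  # $10.00 for 500 GB
# Egress is omitted here because it depends entirely on the destination.
```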
Glen Yu

For a Standard storage bucket, it's prorated. So if you store 100 GB for 1 hour, you will be charged roughly 100 * $0.026 * 1/720 ≈ $0.0036 (assuming a 30-day month, i.e. 720 hours; the per-GB rate varies by region). Refer to https://cloud.google.com/storage/pricing-examples#prorate
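
To make the arithmetic concrete, here is a minimal sketch of that proration applied to the scenario in the question, assuming a 30-day month and a $0.026/GB/month rate (both assumptions; actual rates depend on region):

```python
# Minimal sketch of prorated Standard storage cost, assuming a 30-day
# month (720 hours) and a rate of $0.026/GB/month. Rates vary by region;
# see https://cloud.google.com/storage/pricing.

RATE_PER_GB_MONTH = 0.026
HOURS_PER_MONTH = 24 * 30  # 720 hours in a 30-day month

def prorated_cost(gb: float, hours: float) -> float:
    """Cost of storing `gb` gigabytes for `hours` hours."""
    return gb * RATE_PER_GB_MONTH * (hours / HOURS_PER_MONTH)

# The question's scenario: a 250 GB peak held for 30 minutes each day,
# every day for a 30-day month.
daily = prorated_cost(250, 0.5)
print(f"per day: ${daily:.5f}, per month: ${daily * 30:.4f}")
# per day: ~$0.00451, per month: ~$0.1354
```

So even at the 250 GB peak, the storage component itself is small; operations and egress are likely to dominate.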

Ram