
I don't know how to explain this well, but I will try.

  • I use a Google Cloud SQL Second Generation instance with a 20 GB disk.
  • I have several WordPress databases with a total size of 166.5 MB.

And right now my storage usage is 9.52 GB (for 166.5 MB of SQL data), and it is still growing, faster and faster...

What should I do?!

[Screenshot: Cloud SQL storage usage steadily increasing]

UPDATE:

I solved this by:

  • Exporting the databases to a bucket
  • Creating a new Cloud SQL instance
  • Importing the export from the bucket
  • Deleting the instance with the problem

(And changing the IP in my applications.)
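The steps above can be sketched with the Cloud SDK. All names below (instance, bucket, database) are hypothetical placeholders, and the export/import commands assume the instance's service account has access to the bucket:

```shell
# Placeholders -- substitute your own instance, bucket, and database names.
OLD=wp-instance
NEW=wp-instance-2
BUCKET=gs://my-sql-exports

# 1. Export the database from the problematic instance to a bucket.
gcloud sql export sql "$OLD" "$BUCKET/wp-dump.sql" --database=wordpress

# 2. Create a fresh instance.
gcloud sql instances create "$NEW" --tier=db-n1-standard-1 --storage-size=20

# 3. Import the dump into the new instance.
gcloud sql import sql "$NEW" "$BUCKET/wp-dump.sql" --database=wordpress

# 4. Delete the old instance once the import is verified.
gcloud sql instances delete "$OLD"
```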

I'm not sure where the problem came from, but it could be "storage overhead from binary logs".

Next time I will check the binary logs with: mysql> SHOW BINARY LOGS;
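For anyone wondering where to run that statement (it came up in the comments below): you can run it from any MySQL client, including the stock command-line client. A minimal sketch, where INSTANCE_IP is a placeholder for your instance's IPv4 address:

```shell
# Lists each binary log file and its size in bytes; summing the
# File_size column shows how much disk the binlogs occupy.
mysql -h INSTANCE_IP -u root -p -e "SHOW BINARY LOGS;"
```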

What I think Google is missing is an easy way to purge binary logs!

FINAL UPDATE:

With binary logs enabled, the storage of your Cloud SQL instance will expand continuously.

For anyone in the same situation: you can edit the instance and uncheck binary logging; after that, the existing binary logs will be purged.
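Equivalently, from the Cloud SDK (the instance name is a placeholder; note that changing this setting restarts the instance):

```shell
# Disable binary logging; the existing binary logs are purged once
# the setting is off. This triggers an instance restart.
gcloud sql instances patch wp-instance --no-enable-bin-log
```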

Sorry for my noob question! :D (I'm a beginner at server administration.)

Thanks Vadim!

asked by Alexcsandru · edited by marc_s
  • There is really no way at all we could help with this. We have no way at all of knowing what your code is doing. You probably should contact Google. – durbnpoisn Jan 29 '16 at 21:26
  • Ok, I will do this, thanks. But what I don't understand is where the size comes from, and how to see those files... – Alexcsandru Jan 29 '16 at 21:29
  • Considering it's cloud based, I don't think there are any real "files" to speak of. In any case, something as simple as forgetting to close a connection could cause your database to increase in size. You've got to look at your code. – durbnpoisn Jan 29 '16 at 21:45
  • Do you have binary logs enabled? Can you show us what other flags you have enabled? For example, do you have the mysql general log enabled? – Vadim Jan 29 '16 at 21:45
  • You can use "SHOW BINARY LOGS;" to see how much space is used by the binary logs. – Vadim Jan 29 '16 at 22:24
  • no flag, and I have checked "Enable binary logging (for point-in-time recovery and replication)". – Alexcsandru Jan 30 '16 at 00:22
  • I don't know where to run "SHOW BINARY LOGS" — in a shell? I manage my database with HeidiSQL... After some research, it could be "storage overhead from binary logs". – Alexcsandru Jan 30 '16 at 00:27
  • If you don't need binary logs, disabling them will purge existing binary logs. – Vadim Jan 30 '16 at 00:43
  • Yes, you're right! Thank you Vadim! – Alexcsandru Jan 30 '16 at 01:03

2 Answers


If you have binary logs enabled, MySQL records all changes, which is required for replication or point-in-time recovery.

If you have no need for these features, you can disable binary logs, which will purge any existing logs from your instance.

If binary logs are enabled, they will not grow indefinitely. Binary logs older than the oldest automatic backup (7 days) are purged automatically.

Vadim
  • In Google Cloud SQL Second Generation, point-in-time recovery is not active. And I don't know why the size suddenly started increasing without any changes in settings. (Maybe Google is still working on that.) And at some point the growth must be purged, but when? In the screenshot you can see that it just keeps increasing in size. They should specify a minimum disk size when you enable binary logs, how it grows in size, and maybe purge after X days. – Alexcsandru Jan 30 '16 at 11:59
  • Binary log growth depends on your write queries. Did you start writing more data to the database around that time? Did you receive more traffic around that time? – Vadim Jan 30 '16 at 20:13
  • Traffic increased around 40%, but nothing that big anyway... So the growth was too big and sudden! I think it's something Google may still be working on (a bug maybe). I think it's a bug because point-in-time recovery is not currently active in Second Generation, so maybe that isn't ready yet. – Alexcsandru Jan 31 '16 at 01:15
  • I am an engineer on Cloud SQL and I'm not aware of any reports of issues relating to binary logs. Binary log growth is related to how much data is changed in a statement, not necessarily the number of statements executed. For example, executing a single update statement that touches many rows in a large table will make an entry for every row that is affected. – Vadim Feb 01 '16 at 03:35
  • @Vadim So why is there no finer control over the binary logs than just on/off? Given the huge amount of space they may take up, it ought to be possible to automatically purge binary logs after X days. `They will be purged after some time` is much too vague, it means that as the binary logs keep growing in size, it is impossible to estimate whether the space they will eventually take up is worth the storage cost, especially since it is impossible to revert any increases in storage size. – oulenz Sep 26 '16 at 14:09
  • I've updated my answer to be more specific. Binary logs are purged as older backups roll off. – Vadim Sep 26 '16 at 17:09
  • @vadim, I'd also tell users how to disable the feature in your answer. For those interested you have two options using the cloud console: **1.** Go to https://console.cloud.google.com/sql/instances?project= and select the database instance **2a)** select `edit` in the top action bar, expand `enable auto backups` and uncheck `enable binary logging`, save your changes **2b)** open the `backups tab`, click `manage automated backups`, uncheck `enable binary logging`, save. Keep in mind that these changes require a restart of your database instance! – simne7 Oct 11 '18 at 13:53

I had the same issue with our GCP SQL instance. In my case the storage went from the 10 GB we first configured to 728 GB (with a corresponding increase in billing).

In my case the culprit was archived WAL Logs:

[Metrics Explorer screenshot showing archived WAL log usage]

Just viewing the current log size did not show this, because the logs had already been cleared.

The issue is that GCP has a hidden option:

[Screenshot: "Automatically increase storage" option for Cloud SQL]

This is checked by default (IIRC), so storage will grow automatically during events of high log output (in my case this happened when upgrading the PostgreSQL server version).
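One way to inspect and turn off that setting from the CLI (the instance name is a placeholder; `storageAutoResize` is the field behind the console checkbox):

```shell
# Show whether automatic storage increase is currently enabled.
gcloud sql instances describe my-instance \
    --format="value(settings.storageAutoResize)"

# Disable it so the disk no longer grows on its own.
# (The disk can then fill up, so monitor usage yourself.)
gcloud sql instances patch my-instance --no-storage-auto-increase
```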

The catch is that once increased, the storage cannot be decreased, so you end up with higher monthly operational costs.