
In the Log Explorer, this is the log entry where the stream failed:

```json
{
  "textPayload": "2022-10-17 14:43:12.896 UTC [219890]: [1-1] db=xxx,user=datatstream_test ERROR:  replication slot \"datastream_replication_slot_test\" is active for PID 219872",
  "insertId": "...",
  "resource": {
    "type": "cloudsql_database",
    "labels": {
      "database_id": "xxx-yyy-zzz:xxx-zzz-instance",
      "project_id": "xxx-yyy-zzz",
      "region": "us-central"
    }
  },
  "timestamp": "2022-10-17T14:43:12.897056Z",
  "severity": "ERROR",
  "labels": {
    "INSTANCE_UID": "...",
    "LOG_BUCKET_NUM": "33"
  },
  "logName": "projects/xxx-yyy-zzz/logs/cloudsql.googleapis.com%2Fpostgres.log",
  "receiveTimestamp": "2022-10-17T14:43:14.520407419Z"
}
```

Previous logs show usage of the replication slot by PID 219872 less than a minute earlier. Looking further back in the logs, this appears to be normal behaviour that causes no error as long as the replication slot is acquired twice with at least a minute and a half between attempts. But on two occasions that was not the case, and the stream failed permanently.
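
For context, the slot's current holder can be checked directly in PostgreSQL via the standard `pg_replication_slots` system view. A minimal sketch, using the slot name from the log above:

```sql
-- Show whether the Datastream slot is active and which backend PID holds it
SELECT slot_name, active, active_pid
FROM pg_replication_slots
WHERE slot_name = 'datastream_replication_slot_test';
```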

Is there anything I can do to prevent this from happening, and thus make this setup suitable for production?
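
One manual escape hatch would be to terminate the backend that is still holding the slot so Datastream can reattach, but that is hardly a production fix. A sketch, assuming the conflicting backend can safely be terminated:

```sql
-- Kill the backend currently holding the slot, freeing it for Datastream
SELECT pg_terminate_backend(active_pid)
FROM pg_replication_slots
WHERE slot_name = 'datastream_replication_slot_test'
  AND active;
```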

  • I wrote a related post on the GCP Community forum; you can find it [here](https://www.googlecloudcommunity.com/gc/Serverless/Datastream-with-CloudSQL-PostgreSQL-fails-permanently/m-p/480127#M710) – Thomas Foubert Oct 20 '22 at 09:49
  • I filed a public issue on Google's issue tracker; you can find it [here](https://issuetracker.google.com/issues/254626250) – Thomas Foubert Oct 20 '22 at 14:33
