
In my Go application I use the crontab package to run a Tracker function every minute. As you can see from the code, that function calls a PostgreSQL stored procedure. To interact with the PostgreSQL database, I use the gorm package. The application worked for several days without any problem, but now I notice an error in the logs: `pq: sorry, too many clients already`. I know similar questions have been asked on Stack Overflow before; for example, in this post people advise using the Exec or Scan methods. In my case, as you can see, I already use the Exec method, but I still get the error. As far as I understand it, each database request opens a separate connection and never closes it. I can't figure out what I'm doing wrong.

main.go:

package main

import (
    "questionnaire/controllers"
    "questionnaire/database"
    "questionnaire/utils"

    "github.com/mileusna/crontab"
)

func main() {
    database.ConnectPostgreSQL()
    defer database.DisconnectPostgreSQL()

    err := crontab.New().AddJob("* * * * *", controllers.Tracker)
    if err != nil {
        utils.Logger().Fatal(err)
    }

    select {} // block forever so the scheduled job keeps running
}

tracker.go:

package controllers

import (
    "questionnaire/database"
    "questionnaire/utils"
    "time"
)

var Tracker = func() {
    err := database.DBGORM.Exec("CALL tracker($1)", time.Now().Format("2006-01-02 15:04:05")).Error
    if err != nil {
        utils.Logger().Println(err) // ERROR: pq: sorry, too many clients already
        return
    }
}

PostgreSQL.go:

package database

import (
    "fmt"
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/postgres"
    "github.com/joho/godotenv"
    "questionnaire/utils"
)

var DBGORM *gorm.DB

func ConnectPostgreSQL() {
    err := godotenv.Load(".env")
    if err != nil {
        utils.Logger().Println(err)
        panic(err)
    }

    databaseUser := utils.CheckEnvironmentVariable("PostgreSQL_USER")
    databasePassword := utils.CheckEnvironmentVariable("PostgreSQL_PASSWORD")
    databaseHost := utils.CheckEnvironmentVariable("PostgreSQL_HOST")
    databaseName := utils.CheckEnvironmentVariable("PostgreSQL_DATABASE_NAME")

    databaseURL := fmt.Sprintf("host=%s user=%s dbname=%s password=%s sslmode=disable", databaseHost, databaseUser, databaseName, databasePassword)

    DBGORM, err = gorm.Open("postgres", databaseURL)
    if err != nil {
        utils.Logger().Println(err)
        panic(err)
    }

    err = DBGORM.DB().Ping()
    if err != nil {
        utils.Logger().Println(err)
        panic(err)
    }

    DBGORM.LogMode(true)
}

func DisconnectPostgreSQL() error {
    return DBGORM.Close()
}
Jonathan Hall
Nurzhan Nogerbek
  • Is this the only code your application is using to talk to postgres? – mkopriva Aug 08 '19 at 10:17
  • Now the error appears twice in the `tracker.go` file, but in fact I have several controllers where I make simple queries to the PostgreSQL database, also with the `gorm` package. – Nurzhan Nogerbek Aug 08 '19 at 11:35
  • 2
    The error is generated by the database not by `tracker.go` so it may be that those other queries in your app aren't closing their connections properly, it is also possible to run hundreds of psql sessions against the database through the terminal and not from outside your app and you would see the error in `tracker.go` if the connection limit is reached. – mkopriva Aug 08 '19 at 11:40
  • 2
    ... so don't focus necessarily on `tracker.go`, instead try to figure out why your database is reaching it's connection limit. The cause could be anything that's using the database. It could also be just inadequate configuration of the connection limit, say if it's set to 5, you'll run into this issue very quickly. – mkopriva Aug 08 '19 at 11:41
  • 3
    ... finally, if you're still unable to locate the cause of the issue, try using [`pg_stats_activity`](https://www.postgresql.org/docs/current/monitoring-stats.html) to see the active processes, the sql queries they are executing, from what app they are being executed, etc. – mkopriva Aug 08 '19 at 11:47
  • 2
    ... another thing to consider is how *long* do your queries take to execute, including the `tracker` procedure. Your Go code may be correctly closing every `rows` object and properly releasing the connections to the pool, but if the SQL code just takes too long to finish it may take a while before your Go code gets a chance to execute `rows.Close`, or return from `db.Exec`... the above mentioned `pg_stats_activity` can help with identifying long running queries, or queries that are stuck, waiting for some lock to be released. – mkopriva Aug 08 '19 at 11:57
  • @mkopriva in the other controllers I use the `rows.Close` method as you advised. I have a question. Do I understand correctly that each of my requests creates a new connection and closes it after completion? I ran the query `SHOW max_connections;`, which returned `2000`. Then I ran `SELECT COUNT(*) FROM pg_stat_activity;`, which returned `127`. Does this mean that when the number of active connections (127) becomes bigger than the max number of connections (2000) I will see this error again? – Nurzhan Nogerbek Aug 08 '19 at 12:27
  • 1
    1) Not necessarily, `*database.DB` is a pool of connections, when you run a query an idle connection is retrieved from this pool and then put back after the query is done (e.g. when `rows.Close` is executed). But if there are no idle connections in the pool, because all of them are in use, then `DB` will attempt to open a new one and use that. What happens to this newly opened connection after it's done and idle i do not know, but i assume it's kept in the pool for some time and if it's idle for too long, maybe then it is closed by `DB`.. but that's just my speculation, i don't know for sure. – mkopriva Aug 08 '19 at 12:40
  • 1
    2) That is my understanding at least. If the number of *all* connections, not just active ones but idle ones too, reaches the configured limit you'll see that error. – mkopriva Aug 08 '19 at 12:43
  • Try reading [this](https://www.citusdata.com/blog/2018/02/15/when-postgresql-blocks/) and then consider whether it's possible that your `tracker` procedure causes a table lock to occur, and then consider whether it's possible that the `tracker` procedure might take longer than just a minute to finish... and if you can see both of these scenarios as possible, then that is probably the issue... – mkopriva Aug 08 '19 at 12:54
  • ... In that case you either have to optimize the procedure to make it run under a minute or you need to widen the time window between the calls to that procedure because if you don't, over time, the number of connections opened and unreleased will only grow, never going down, as every call to tracker must wait for the previous call to finish. – mkopriva Aug 08 '19 at 12:55
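
The last two comments describe overlapping runs: crontab fires `Tracker` every minute, and if `CALL tracker(...)` takes longer than that, each overlapping run holds another connection while waiting on the previous one. A minimal guard for that scenario can be sketched as follows; `guardedRun` and the in-flight flag are illustrative names, not part of the question's code, and the closure stands in for the real database call.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// inFlight is 1 while a tracker run is executing.
var inFlight int32

// guardedRun executes work unless a previous run is still in flight,
// in which case it skips this tick instead of stacking up another
// connection-holding call. It reports whether work was executed.
func guardedRun(work func()) bool {
	if !atomic.CompareAndSwapInt32(&inFlight, 0, 1) {
		return false // previous run still busy: skip
	}
	defer atomic.StoreInt32(&inFlight, 0)
	work()
	return true
}

func main() {
	started := make(chan struct{})
	release := make(chan struct{})
	done := make(chan bool)

	// Simulate a slow CALL tracker(...) that outlives one cron tick.
	go func() {
		done <- guardedRun(func() {
			close(started)
			<-release
		})
	}()
	<-started

	fmt.Println(guardedRun(func() {})) // false: previous run still in flight

	close(release)
	fmt.Println(<-done) // true: the slow run itself completed

	fmt.Println(guardedRun(func() {})) // true: no run in flight any more
}
```

Skipping a tick only masks the symptom, of course; the underlying fix is still to make the procedure finish well under the cron interval or to widen the interval, as the comments suggest.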

0 Answers