I have a web application written in Go that queries a Postgres database. When I get my records back, I iterate over them with rows.Next and scan each row into a struct with rows.Scan.
How can I make this whole process faster?
I think this function is not very efficient, because as new records are added to the database, the time to scan all of them grows as well. I thought about using goroutines, but I am worried that two goroutines might scan the same row. Could I prevent that with mutexes? But then what is the point of using concurrency if the mutex locks stop the other goroutines from accessing the data anyway? (I put a sketch of what I mean with a mutex after the second snippet below.)
Here is the code I am planning to improve:
func GetUsers() ([]publicUser, error) {
    query := `select user_id, first_name, last_name, registered_at from users;`
    rows, err := db.Query(query)
    if err != nil {
        return nil, err
    }
    // release the underlying connection back to the pool when we are done
    defer rows.Close()

    var us []publicUser
    for rows.Next() {
        var u publicUser
        if err = rows.Scan(&u.UserId, &u.FirstName, &u.LastName, &u.RegisteredAt); err != nil {
            log.Println(err, "GetUsers")
            return nil, err
        }
        us = append(us, u)
    }
    if err := rows.Err(); err != nil {
        log.Println(err, "GetUsers2")
        return nil, err
    }
    return us, nil
}
Should I fire off a new goroutine for every row, like this?
func GetUsers() ([]publicUser, error) {
    query := `select user_id, first_name, last_name, registered_at from users;`
    rows, err := db.Query(query)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var us []publicUser
    for rows.Next() {
        go func() {
            var u publicUser
            // I cannot return the error to the caller from inside the
            // goroutine, so I only log it here
            if err := rows.Scan(&u.UserId, &u.FirstName, &u.LastName, &u.RegisteredAt); err != nil {
                log.Println(err, "GetUsers")
                return
            }
            us = append(us, u) // is this append safe from several goroutines?
        }()
    }
    if err := rows.Err(); err != nil {
        log.Println(err, "GetUsers2")
        return nil, err
    }
    return us, nil
}
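And if I do need a mutex, I imagine it would look something like the sketch below. This is just what I have in mind, not tested: a sync.Mutex to guard the shared slice and a sync.WaitGroup so the function does not return before the goroutines finish. I am also not sure whether rows.Scan itself is safe to call from several goroutines, which is really the heart of my question.

func GetUsers() ([]publicUser, error) {
    query := `select user_id, first_name, last_name, registered_at from users;`
    rows, err := db.Query(query)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var (
        us []publicUser
        mu sync.Mutex     // guards us
        wg sync.WaitGroup // lets us wait for all the scans to finish
    )
    for rows.Next() {
        wg.Add(1)
        go func() {
            defer wg.Done()
            var u publicUser
            if err := rows.Scan(&u.UserId, &u.FirstName, &u.LastName, &u.RegisteredAt); err != nil {
                log.Println(err, "GetUsers")
                return
            }
            mu.Lock() // the mutex only protects the append, not the Scan itself
            us = append(us, u)
            mu.Unlock()
        }()
    }
    wg.Wait()
    if err := rows.Err(); err != nil {
        log.Println(err, "GetUsers2")
        return nil, err
    }
    return us, nil
}

But if every goroutine has to take the same lock around the shared data anyway, I do not see what the concurrency actually buys me here.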