
So, for example, say we have a function that performs some kind of transaction; it is imported in different parts of the application, where it keeps being called.

// myDb.mjs
import mysql from 'mysql';

export const db = mysql.createConnection({
    host: 'localhost',
    user: 'root',
    password: 'password',
    database: 'test'
});


// myFunctions.mjs
import { db } from './myDb.mjs';

export function query() {
  db.query('SELECT * FROM table', (error, res) => {
    console.log(res)
  })
}

Now, in PHP using PDO the connection lives until the script ends (see related: Is it necessary to close PDO connections). How do I handle this in Node using mysql?

  1. I see that there are connection.end() and connection.destroy(). When do I call them: after the query executes, or at the end of the script? Do I have to end the connection manually?
  2. Does the server open a connection to the database every time a fetch request comes through from the client?
tadman
RDU
    In most cases you will want to use a pool and release connections back to the pool when done. If it helps, I wrote an article for people just getting started with this: https://evertpot.com/executing-a-mysql-query-in-nodejs/ – Evert May 24 '23 at 17:13
  • Does this answer your question? [When to close MySQL connection using node-mysql?](https://stackoverflow.com/questions/13266013/when-to-close-mysql-connection-using-node-mysql) – Charlie May 24 '23 at 17:33
  • (I am ignorant about node.js) Does a single instance of node.js live forever? Or does every request launch a new instance (a .a PHP)? PHP benefits from have a single connection per PHP invocation. – Rick James May 24 '23 at 21:24

2 Answers


Ideally you don't close them; you return them to a connection pool. Database connections are often a contended resource, so you should create them only when necessary.

In the code you have here, a single connection is created in the module and used for all subsequent operations. If that connection fails, expires, or is otherwise incapacitated, your program will just emit errors.

I'd recommend using a light abstraction layer like Sequelize. It has connection pool support, hands out connections that are tested before being issued, and as a bonus your code becomes more database agnostic.

You'll also want to use promises, as callback-driven code gets extraordinarily messy really fast.
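To make the pooling idea concrete, here is a minimal toy sketch, with no real database driver involved; `ToyPool` and its `createConnection` callback are stand-ins invented for illustration. It shows the acquire/reuse/release cycle that a real pool (such as the one in mysql2 or Sequelize) automates for you:

```javascript
// Toy illustration of connection pooling (not a real driver).
// A pool keeps idle connections and hands them back out instead of
// creating a new one for every query.
class ToyPool {
  constructor(createConnection, max = 2) {
    this.createConnection = createConnection;
    this.max = max;
    this.idle = [];   // connections waiting to be reused
    this.total = 0;   // connections created so far
  }
  acquire() {
    if (this.idle.length > 0) return this.idle.pop(); // reuse an idle one
    if (this.total >= this.max) throw new Error('pool exhausted');
    this.total += 1;
    return this.createConnection(); // create only when necessary
  }
  release(conn) {
    this.idle.push(conn); // return to the pool instead of closing
  }
}

let created = 0;
const pool = new ToyPool(() => ({ id: ++created }));

const a = pool.acquire();   // creates connection #1
pool.release(a);            // done with it; give it back
const b = pool.acquire();   // reuses connection #1, nothing new created
console.log(created);       // 1
console.log(a === b);       // true
```

A real pool additionally handles waiting when every connection is busy, validating idle connections, and retiring expired ones; the sketch only shows the core reuse behavior that makes pooling cheaper than reconnecting per query.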

tadman
  • @easleyfixed Not sure PHP is the younger brother here. ASP came out in 2000, PHP dates from 1995. If anything ASP is a response to PHP by Microsoft. – tadman May 24 '23 at 17:32
  • So, the idea behind the connection pool is that it manages the connections for you? Closes them, opens more if needed? If I use the code in the example, I'd have to do that myself? – RDU May 24 '23 at 17:39
  • Exactly. A good connection pool will take care of everything for you, cleaning up connections as required, refreshing them when they expire, etc. In practice you'll get faster responses, as the cost of establishing a connection can often far outstrip the cost of the query. – tadman May 24 '23 at 17:44
  • "tested before issuing" -- I wonder if anyone has compared the time taken for testing versus simply reconnecting. (MySQL is very fast at creating connections.) – Rick James May 24 '23 at 21:20
  • @RickJames Most of the libraries I've worked with enough to get into the guts of the connection pool only check periodically, and it's effectively a `SELECT 1` or something equivalent, it takes under a millisecond. Reconnecting has a lot more overhead, you need to go through the TLS negotiation, the auth procedure, potentially a DNS lookup, and more. – tadman May 25 '23 at 19:33
  • @tadman - OK that is efficient at keeping the connection alive. But what about clearing @-variables, dropping temp tables, etc? That is, resetting the connection before giving it to the next user. And I agree that TLS, etc is probably so costly as to make the rest feel cheap. I have not seen any command to clear variables, etc.; have you? – Rick James May 25 '23 at 19:47
  • @RickJames If you're arguing against connection pooling in general, that's an opinion you're entitled to. In my experience with them there's a certain responsibility on the connection's user to not leave it in a state that's completely wrecked. The amount of unwinding done depends *entirely* on the pooling library in question. Some do an excellent job, some nothing at all. For any given library you'll need to understand what you can get away with, and what will cause chaos. – tadman May 25 '23 at 19:50
  • @RickJames For any application that makes use of connection pooling the best practice is to not create variables at all. If you need that sort of thing you can always encapsulate that in a stored procedure, keep it contained. This can also help with complicated locking code, etc. as you can do the unwinding in the stored procedure itself. If you treat the connection as a shared resource instead of an exclusive one you will need to be more careful as a rule. – tadman May 25 '23 at 19:54

A better approach is to use a managed pool of connections. It caches a set of connections and reuses them when necessary, which improves both performance and response time. The pool takes care of opening, reusing, and closing the individual connections.

 const pool = mysql.createPool(connectionStr);
 ...
 ...

 // When the application closes
 await pool.end();
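To illustrate the lifecycle this answer describes, here is a toy sketch with no real database involved; `FakeConnection` and `FakePool` are stand-ins invented for illustration, not part of the mysql API. The pool is created once at startup, every query reuses an already-open connection, and the connections are only closed when the application shuts down:

```javascript
// Toy sketch of the pool lifecycle: open once, reuse per request,
// close only at application shutdown.
class FakeConnection {
  constructor() { this.open = true; }
  query(sql) { return `result of: ${sql}`; }
  end() { this.open = false; }
}

class FakePool {
  constructor() { this.connections = [new FakeConnection()]; }
  query(sql) { return this.connections[0].query(sql); } // reuse, don't reopen
  end() { this.connections.forEach(c => c.end()); }     // close all at shutdown
}

const pool = new FakePool();           // created once at startup
console.log(pool.query('SELECT 1'));   // result of: SELECT 1
console.log(pool.query('SELECT 2'));   // same connection reused
pool.end();                            // only when the application closes
console.log(pool.connections[0].open); // false
```

In a real application the pool would hold many connections and grow or shrink on demand; the point of the sketch is that application code never opens or closes a connection per query.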
Charlie