
I'm setting up the ability for my Node server to load the proper information from my database (Postgres) to render a certain client view. I'm currently refactoring my server code to follow an object-oriented approach with class constructors.

I currently have it so that Readers are a class of functions that are responsible for, well, running read queries on my database. I have inherited classes like MainViewReader and MatchViewReader, and they all inherit from a "Reader" class which instantiates a connection with postgres using the pg-promise library.
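
For reference, here is a minimal sketch of the pattern described above, assuming the base Reader constructor is what creates the pg-promise connection (the class names, table names and queries are just placeholders):

const pgp = require('pg-promise')()

class Reader {
  constructor() {
    // every instance builds its own database object, i.e. its own pool
    this.db = pgp(/* connection options */)
  }
}

class MainViewReader extends Reader {
  loadMainView() {
    return this.db.any('SELECT * FROM news')      // placeholder query
  }
}

class MatchViewReader extends Reader {
  loadMatchView() {
    return this.db.any('SELECT * FROM matches')   // placeholder query
  }
}

// using two readers in one request creates two duplicate pools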

The issue with this is that I can't use two view readers without opening duplicate connections, so I find myself writing redundant code. I believe I have two design choices, and I was wondering which is more efficient:

  1. Instead of setting the pattern to be by servlet view, set the pattern to be by the table read by that class, i.e. NewsTableReader, MatchTableReader. The pro of this is that none of the code is redundant and it can be used in different servlets; the con is that I would have to end the connection to Postgres on every instance of the Reader class before instantiating a new one, as such:
const newsTableReader = new NewsTableReader()
await newsTableReader.close()

const matchTableReader = new MatchTableReader()
await matchTableReader.close()

  2. Just having view readers. The pro is that there is only one persisting connection; the con is that there is a lot of redundant code if I'm loading data from the same tables in different views, for example:
const matchViewReader = new MatchViewReader()
await matchViewReader.load_news()
await matchViewReader.load_matches()

Which approach is going to affect my performance negatively the most?

1 Answer


You've correctly ascertained that you should not create multiple connection pools with the same connection options. But this doesn't have to influence the structure of your code.

You could create a global pool and pass it to your Reader constructors as a kind of dependency injection:

class Reader {
  constructor(db) {
    // db is the shared pg-promise database object: one pool for the whole app
    this._db = db
  }
}

// illustrative load() implementations; the queries/table names are placeholders
class NewsTableReader extends Reader {
  load() {
    return this._db.any('SELECT * FROM news')
  }
}
class MatchTableReader extends Reader {
  load() {
    return this._db.any('SELECT * FROM matches')
  }
}


const pgp = require('pg-promise')(/* library options */)
const db = pgp(/* connection options */)

const newsTableReader = new NewsTableReader(db)
const matchTableReader = new MatchTableReader(db)

await newsTableReader.load()
await matchTableReader.load()
// or run them concurrently:
// await Promise.all([newsTableReader.load(), matchTableReader.load()])

Another way to go is to use the same classes with the extend event of the pg-promise library:

const pgp = require('pg-promise')({
  extend(obj, dc) {
    // obj is the root database object or a task/transaction context
    obj.newsTableReader = new NewsTableReader(obj);
    obj.matchTableReader = new MatchTableReader(obj);
  }
})

const db = pgp(/* connection options */)

await db.newsTableReader.load()

await db.tx(async t => {
    const news = await t.newsTableReader.load();
    const match = await t.matchTableReader.load();
    return {news, match};
});

The upside of the extend event is that you can use all of the functionality (e.g. transactions and tasks) provided by the pg-promise library across different models. The thing to keep in mind is that it creates new objects on every db.task(), db.tx() and db.connect() call.
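
To see that last point in action, here is a small sketch building on the classes above: the reader attached to a task/transaction context is a fresh instance, distinct from the one attached to the root db object, because the extend callback runs again for that context.

await db.tx(async t => {
  // the extend event ran again for this transaction context,
  // so this reader is bound to `t`, not the same object as db.newsTableReader
  console.log(t.newsTableReader === db.newsTableReader) // false
  return t.newsTableReader.load()
})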

  • When you are saying that creation of a new repository object within task/tx/connect represents a downside, what do you mean by that? Why is that an issue? – vitaly-t May 28 '19 at 12:53
  • My thought was that this could have performance implications, especially with dozens of repositories. It's not really a downside, more like something to keep in mind. It might be insignificant in most cases. – boromisp May 30 '19 at 19:51