
Problem:

The front-end page makes x parallel requests (let's call them the first group); the next group (another x requests) arrives 5 seconds later. The first request of the first group sets the cache from the DB, but the other x-1 requests get an empty array instead of waiting for the first request to finish its job. The second group and all subsequent requests get proper data from the cache.

What is the best practice for blocking the other requests until the first one is done (or fails), in a stateless mechanism?

EDIT:

The cache module allows using a trigger on cache set, but that doesn't work here since the mechanism is stateless.

const GetDataFromDB = async (req, res, next) => {
  let cachedTableName = undefined;
  // "lockFlag" is used to keep parallel requests out of the critical section
  // (because setting the cache from the DB takes time).
  // It is a short-lived entry in the cache itself.
  if ( !myCache.has( "lockFlag" ) && !myCache.has( "dbtable" ) ){
      // Only the first request of the first group arrives here;
      // the other x-1 requests of the first group fall through to the next condition.
      // Here I would build a mechanism to wait until the first request
      // comes back from the DB (i.e., until the cache is initialized).
      myCache.set( "lockFlag", "1" );

      const connection1 = await odbc.connect(connectionConfig);
      cachedTableName = await connection1.query(`select * from ${tableName}`);

      if (cachedTableName.length){
          const success = myCache.set([
              { key: "dbtable", val: cachedTableName, ttl: 180 },
          ]);
          if (success){
              cachedTableName = myCache.get( "dbtable" );
          }
      }
      myCache.take("lockFlag");
      connection1.close();
      return res.status(200).json(cachedTableName); // used for the first response
  }

  // The other x-1 requests of the first group arrive here and get nothing,
  // because the cache is not set yet.
  if ( myCache.has( "dbtable" ) ){
      cachedTableName = myCache.get( "dbtable" );
  }
  return res.status(200).json(cachedTableName);
}
shdr
  • Commenting on the general premise of your question: in the trade-off between letting the first group's x-1 requests just retrieve data from the DB rather than from the cache, versus locking those x-1 requests so that they wait for the first request to complete, you lean towards the latter? Why? – OfirD Jun 04 '21 at 13:50
  • @OfirD, because x could be large enough to overload the connection to the DB; it sometimes crashes the app. – shdr Jun 05 '21 at 20:43
  • Sounds like an [XY problem](https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem). If the crashing connection is the real problem, then you should ask directly about that, and add the relevant connection code and the actual errors. Going the current way doesn't really solve the problem, as you noted yourself in your answer. – OfirD Jun 07 '21 at 21:52
  • @OfirD, this is a **code development question** about the best approach; that's the name I gave to the subject. Some will say that when the first group returns empty it's not an issue at all, and some will say to thicken the connection to the DB (like you), but that's not an answer to **this** question. – shdr Jun 08 '21 at 07:38

2 Answers


You can try the approach given here, with minor modifications to apply it to your case.

  • For brevity, I removed comments and shortened variable names.

Code, then explanation:

const EventEmitter = require('events');
const bus = new EventEmitter();

const getDataFromDB = async (req, res, next) => {
  var table = undefined;
  if (myCache.has("lockFlag")) { 
     await new Promise(resolve => bus.once("unlocked", resolve));
  }
  if (myCache.has("dbtable")) {
     table = myCache.get("dbtable");
  }
  else {
      myCache.set("lockFlag", "1");  
      const connection = await odbc.connect(connectionConfig); 
      table = await connection.query(`select * from ${tableName}`);
      connection.close();
      if (table.length) {
          const success = myCache.set([
              { key: "dbtable", val: table, ttl: 180 },
          ]);
      }
      myCache.take("lockFlag");
      bus.emit("unlocked");
  }
  return res.status(200).json(table);
}

This is how it should work (a runnable sketch of the same pattern follows the list):

  • At first, lockFlag is not present.
  • Then, some code calls getDataFromDB. That code evaluates the first if block to false, so it continues: it sets lockFlag to "1", then goes on to retrieve the table data from the db. In the meantime:
  • Some other code calls getDataFromDB. That code, however, evaluates the first if block to true, so it awaits on the promise until an unlocked event is emitted.
  • Back to the first calling code: it finishes its logic, caches the table data, removes lockFlag, emits an unlocked event, and returns.
  • The other code can now continue its execution: it evaluates the second if to true, so it takes the table from the cache, and returns.
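
To see the pattern in isolation, here is a minimal, self-contained sketch (my own toy version, not the OP's code: a plain object stands in for myCache, and a setTimeout stands in for the DB query). Running it with node shows that only caller 1 hits the "db", while callers 2 and 3 wait and are then served from the cache:

const EventEmitter = require('events');
const bus = new EventEmitter();

const cache = {};    // stands in for myCache
let locked = false;  // stands in for the "lockFlag" cache key

async function getData(id) {
  if (locked) {
    // park this caller until the lock holder emits "unlocked"
    await new Promise(resolve => bus.once('unlocked', resolve));
  }
  if (cache.dbtable) {
    console.log(`caller ${id}: served from cache`);
    return cache.dbtable;
  }
  locked = true;
  console.log(`caller ${id}: querying the "db"...`);
  const table = await new Promise(r => setTimeout(() => r(['row1', 'row2']), 1000));
  cache.dbtable = table;
  locked = false;
  bus.emit('unlocked');
  return table;
}

// three "parallel requests": only the first one queries the "db"
getData(1); getData(2); getData(3);

One caveat: bus.once adds one listener per waiting request, and Node prints a MaxListenersExceededWarning once more than ten listeners wait on the same event, so with a large x you may want bus.setMaxListeners(0).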
OfirD
  • I think EventEmitter does not provide a bus across multiple backend instances. It is better to replace it with a message queueing service (the solution may still include EventEmitter). – raxetul Jun 10 '21 at 08:28
  • @raxetul, the OP doesn't mention a multiple-backend-instances environment. – OfirD Jun 10 '21 at 08:57
  • Our comments will warn whoever wants to use this answer in distributed environments ;) – raxetul Jun 10 '21 at 10:24
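
For anyone reading this in a multi-instance setup, a common replacement for the in-process lock is an atomic Redis key. A minimal sketch using the ioredis client (the key names, lock timeout, and retry delay here are my own assumptions, not from this thread):

const Redis = require('ioredis');
const redis = new Redis(); // assumes a local Redis on the default port

async function getTable(queryDb) {
  const cached = await redis.get('dbtable');
  if (cached) return JSON.parse(cached);

  // SET ... NX PX is atomic: only one instance acquires the lock,
  // and it auto-expires after 5s in case the holder crashes.
  const acquired = await redis.set('lockFlag', '1', 'PX', 5000, 'NX');
  if (!acquired) {
    await new Promise(r => setTimeout(r, 200)); // short wait, then retry
    return getTable(queryDb);
  }
  try {
    const table = await queryDb();
    await redis.set('dbtable', JSON.stringify(table), 'EX', 180);
    return table;
  } finally {
    await redis.del('lockFlag');
  }
}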

As a workaround, I added a "finally" block to remove the lock key from the cache after the first initialization, plus this polling loop:

while (myCache.has("lockFlag")) {
    await wait(1500);
}

And the "wait" function:

function wait(milliseconds) {
    return new Promise(resolve => setTimeout(resolve, milliseconds));
}

(source)
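
For completeness, this is roughly how the whole handler reads with the polling loop and the "finally" combined (my reconstruction of the description above, reusing myCache, odbc, connectionConfig and tableName from the question):

const getDataFromDB = async (req, res, next) => {
  // poll until the first request releases the lock
  while (myCache.has("lockFlag")) {
    await wait(1500);
  }
  if (myCache.has("dbtable")) {
    return res.status(200).json(myCache.get("dbtable"));
  }
  myCache.set("lockFlag", "1");
  let table;
  try {
    const connection = await odbc.connect(connectionConfig);
    table = await connection.query(`select * from ${tableName}`);
    if (table.length) {
      myCache.set([{ key: "dbtable", val: table, ttl: 180 }]);
    }
    connection.close();
  } finally {
    // the lock is removed even if the DB call throws
    myCache.take("lockFlag");
  }
  return res.status(200).json(table);
};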

This works, but there can still be a window (<1500 ms) during which the cache is already set but a waiting request is not yet aware of it.

I'd be happy for a better solution.

shdr