
I have multiple backend replicas in order to provide horizontal scaling. The backend uses Apollo GraphQL subscriptions. There is also a microservice that provides notifications for particular subscriptions.

Since I don't keep any state in the backend, I've tried to solve the problem by implementing Redis PUB/SUB. When the microservice receives an event, it publishes it to the backends.

In the subscription resolver of the backend I have:

    webhookCalled: {
        subscribe: withFilter(
            () => pubsubMyEvent.asyncIterator([MY_EVENT]),
            (payload, _, context) => {
                return context.dataValues.id == payload.userid;
            }
        )
    }

In the above code I am trying to filter out subscriptions whose payload is not addressed to them. I am not sure how costly the withFilter routine is. When I receive a PUB from Redis, I call:

    pubsubMyEvent.publish(MY_EVENT, { myEventData });

What I don't like here is that each backend will process (publish(...)) all events, even though in the end only one backend will actually send the subscription message to the GraphQL client.
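To illustrate the concern, the filtering step can be modeled like this (a simplified sketch of what withFilter effectively does on each replica, not the actual graphql-subscriptions internals; `dispatch` and the subscriber shape are hypothetical):

```javascript
// Simplified model: every replica receives the Redis message, runs the
// predicate against each of its local subscribers, and forwards only on
// a match — so most replicas do the work and deliver nothing.
function dispatch(subscribers, payload) {
  let delivered = 0;
  for (const sub of subscribers) {
    // Same predicate as in the resolver above.
    if (sub.context.dataValues.id == payload.userid) {
      sub.send(payload); // goes down this replica's websocket
      delivered++;
    }
  }
  return delivered; // often 0 on most replicas
}
```

The predicate itself is cheap; the cost is that it runs on every replica for every event.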

Question: how can I efficiently deliver events to GraphQL subscription clients while keeping the backend scalable? Perhaps avoid bothering all backend copies when only a single websocket connection needs to be notified. Should I keep track of all connected clients in Redis, so Redis knows where each GraphQL subscription client is connected?


Pablo

1 Answer


Sending a published event to all the clients subscribed to the same channel is the nature of Redis PUB/SUB.

A possible solution that keeps the same architecture could be to use a different channel for each user rather than a single shared channel. The backend should subscribe to the channel MY_EVENT + user.uuid once a new user connects and unsubscribe from that channel once the user disconnects. On the other side, the service, once the webhook is called, should publish not on the global channel but on the MY_EVENT + user.uuid channel.

Daniele Ricci
  • This means I need some watchdog service in case one of the backends dies or the app crashes, to unsubscribe that user or all users on that node. The number of connections would have to stay strictly in sync for this to work. – Pablo Jul 16 '20 at 15:45
  • Sorry @Pablo, what is the app? I suppose it is the web client which connects through WS to the backends. Anyway, since the one which subscribes to a channel is the backend, when a backend crashes it doesn't need to unsubscribe: on the other side of the socket, Redis will drop all channels kept alive by the crashed backend. When an app crashes, it's up to the backend to unregister (you need `ws.on("close", handleUnregister)` anyway). No watchdog service is required. Lastly, the number of connections is untouched; the channel is something logical within a connection. – Daniele Ricci Jul 16 '20 at 16:27
  • it's a web browser or mobile app, using the same websocket connection. It's good to know that Redis will drop the subscriptions. Thx – Pablo Jul 16 '20 at 16:40
  • Ok @Pablo, so my supposition about the app was correct. Do you agree that you don't need any watchdog service with my proposal, or do you see some other problem? – Daniele Ricci Jul 16 '20 at 16:51
  • while this is a legit solution, it looks like there is a native way to accomplish this if I use the Redis implementation of PubSub, in contrast to my diagram, where I used the default `EventEmitter`. – Pablo Jul 17 '20 at 12:59
  • Sorry @Pablo, but this is the native way: PUB/SUB is designed to dispatch the message to **all** the clients subscribed to the channel. If you want to distinguish which clients receive a message, you have to use distinct channels. If you want to use the same channel, you have no option other than checking whether the incoming message is to be handled or thrown away (i.e. handled by another client). – Daniele Ricci Jul 17 '20 at 13:23