
Let's say I have a service A and a service B.

A receives requests from clients (via an HTTP interface).

For each request A publishes an event using Reactor Kafka (producer).

Service B consumes those events, using Reactor Kafka (consumer).

If the system comes under pressure, can service B tell service A to slow down, so that A stops accepting new client requests until B signals that it can continue?

Is this achievable with Project Reactor?

raduone
  • Kafka is an event store. It does not push records down to the consumers. Why does it matter to you how fast A is sending events to B? You should be controlling how fast B consumes the events. – Prashant Pandey Jul 28 '19 at 16:40
  • I know it doesn't push to consumers. In most use cases it would suffice for the consumer to be stable, and to add more consumers to accommodate the publishing rate. In this use case I wanted to see if there is a way to push a command to the producers so they adjust their publishing rate and the log of unprocessed events doesn't grow too large. For this particular scenario I only want a limited amount of queueing in Kafka; otherwise clients keep pushing requests into the producers, and by the time the consumer gets to the events they are already stale. – raduone Jul 29 '19 at 15:13
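The admission control described in the comment above can be sketched with plain `java.util.concurrent`: service A tracks how far service B lags (for example from consumer-group lag metrics polled out of band; the lag feed itself is an assumption here, Kafka does not push it to A) and rejects new client requests once the lag crosses a threshold. This is a hypothetical sketch, not a Reactor Kafka API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy admission gate for service A: reject new client requests once
// service B's consumer lag (reported out of band) exceeds a threshold.
public class AdmissionControl {
    private final AtomicLong consumerLag = new AtomicLong(); // records B has not processed yet
    private final long maxLag;

    public AdmissionControl(long maxLag) {
        this.maxLag = maxLag;
    }

    // Called by whatever monitors B's consumer-group lag (e.g. a metrics poller).
    public void reportLag(long lag) {
        consumerLag.set(lag);
    }

    // A's HTTP layer calls this before publishing; false means e.g. "reply 503".
    public boolean tryAdmit() {
        return consumerLag.get() < maxLag;
    }

    public static void main(String[] args) {
        AdmissionControl gate = new AdmissionControl(100);
        gate.reportLag(42);
        System.out.println(gate.tryAdmit()); // true: B is keeping up, admit the request
        gate.reportLag(250);
        System.out.println(gate.tryAdmit()); // false: A should reject clients for now
    }
}
```

The threshold (`maxLag`) and the lag-reporting mechanism are design choices; nothing in Kafka or Reactor mandates them.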

1 Answer


I don't think you need to do that when using Kafka. Kafka logs (which are not queues) are meant to be read by multiple applications, each consuming the same records at its own rhythm. For this, Kafka maintains an offset for each application (consumer group).

For example, in your case, if A produces 2 records/s and B consumes 1 record/s, Kafka will store the records produced by A and make them available to B (and to any other application that wants to consume them). You should be fine.
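The decoupling described above can be shown with a toy in-memory model of a Kafka log (this is an illustration of the offset mechanics only, not a Kafka client): producers append to the log, each consumer group tracks its own offset, and a slow consumer never blocks the producer; it just lags behind.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a Kafka log with per-consumer-group offsets.
public class LogModel {
    private final List<String> log = new ArrayList<>();           // append-only record log
    private final Map<String, Integer> offsets = new HashMap<>(); // offset per consumer group

    public void produce(String record) {
        log.add(record); // the producer is never throttled by consumers
    }

    // Poll the next record for a group, or null if the group is caught up.
    public String poll(String group) {
        int offset = offsets.getOrDefault(group, 0);
        if (offset >= log.size()) return null;
        offsets.put(group, offset + 1); // advance ("commit") the group's offset
        return log.get(offset);
    }

    // How many records the group has not consumed yet.
    public int lag(String group) {
        return log.size() - offsets.getOrDefault(group, 0);
    }

    public static void main(String[] args) {
        LogModel kafka = new LogModel();
        // A produces 4 records while B has only consumed 2 of them.
        for (int i = 0; i < 4; i++) kafka.produce("event-" + i);
        kafka.poll("service-B");
        kafka.poll("service-B");
        System.out.println(kafka.lag("service-B")); // prints 2: B lags but A was never blocked
    }
}
```

A second group polling the same log would start from offset 0 and see all four records, independently of B's progress.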

Obviously, if A keeps producing at a higher rate than B indefinitely, you may have to scale your consumers and add one or more additional B instances. This is how Kafka is designed.

Yannick
  • Indeed, that is a strategy. You could use backpressure at the application level (not across network boundaries) just to make sure you don't reach an out-of-memory situation, and let the consumer handle events at its own pace. But in my use case I don't want service A, the producer, to keep publishing events, because by the time service B, the consumer, starts handling them they could be stale. So I would rather restrict all traffic as a measure, and judge the need for scaling not by measuring unprocessed events, but by seeing A reject clients. – raduone Jul 24 '19 at 12:44
  • I am curious whether Reactor also supports this scenario, because it's not clear to me from the documentation or from other material I've searched. The documentation states that you can have a fully reactive pipeline supported by backpressure, but I am not sure that also covers multiple services (across network boundaries). – raduone Jul 24 '19 at 12:46
  • 1
    From what I see from reactor : "The number of in-flight sends can be controlled using the maxInFlight option", might help.. but it looks like static config behavior , still looking for any kind dynamic backpressure mechanism – Yannick Jul 24 '19 at 13:41
  • Yeah, I saw that property; it seemed to me it's only used for the producer pipeline, as in my example: HTTP request -> service A -> publish to Kafka. But I think I'll try to code an example and see how it works. I'll update this thread with what I find afterwards. – raduone Jul 24 '19 at 13:50
  • Ok, I look forward to your updates. There is another post dealing with the same kind of subject, btw (but with the same kind of answer): https://stackoverflow.com/questions/49663383/back-pressure-in-kafka – Yannick Jul 24 '19 at 13:51
  • Thank you for your suggestion and your time; if you find anything else, please update this thread. – raduone Jul 24 '19 at 13:52
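The `maxInFlight` option mentioned in the comments bounds how many sends may be outstanding at once. Its effect can be sketched with a plain `java.util.concurrent.Semaphore` (this is a toy model of the behavior, not the actual `reactor.kafka.sender.SenderOptions` API; the class and method names here are made up for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of a maxInFlight-style bound: at most N sends may be
// outstanding at once; additional senders block until an earlier
// send completes (i.e. until the simulated broker ack arrives).
public class MaxInFlightDemo {

    // Runs `tasks` eager senders against a bound of `maxInFlight`
    // and returns the highest number of sends observed in flight.
    public static int run(int maxInFlight, int tasks) throws InterruptedException {
        Semaphore permits = new Semaphore(maxInFlight);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(8); // 8 eager "senders"

        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try {
                    permits.acquire(); // blocks once maxInFlight sends are outstanding
                    int now = inFlight.incrementAndGet();
                    peak.accumulateAndGet(now, Math::max);
                    Thread.sleep(5);   // simulate waiting for the broker ack
                    inFlight.decrementAndGet();
                    permits.release(); // frees a slot for the next send
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return peak.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int peak = run(2, 40);
        System.out.println("peak in flight: " + peak); // never exceeds 2
    }
}
```

As the comments note, this is a static bound: it limits concurrency on the producer pipeline but does not let the consumer dynamically signal the producer to slow down across the network boundary.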