I have a gRPC service that accepts streaming messages from a client. The client sends a finite sequence of messages to the server at a high rate.
As a result, the server buffers a large number of messages (> 1 GB); its memory usage skyrockets and then slowly drains as it works through them.
I find that even if I await all async calls, the client just keeps pushing messages as fast as it can. I would like the client to slow down.
I have implemented an explicit ack response that the client waits for before sending the next message, but since HTTP/2 already has flow-control semantics built in, I feel like I'm reinventing the wheel a bit.
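For context, the explicit-ack workaround looks roughly like this on the client side (a simplified sketch; `Upload`, `DataChunk`, and `Ack` are illustrative names, not my real types):

```csharp
// Bidirectional stream: send one message, then wait for the server's
// ack before sending the next. This throttles the client manually.
using var call = client.Upload();
foreach (var chunk in chunks)
{
    await call.RequestStream.WriteAsync(chunk);

    // Block until the server acknowledges this message.
    if (!await call.ResponseStream.MoveNext())
        throw new InvalidOperationException("Stream closed before ack was received.");
}
await call.RequestStream.CompleteAsync();
```

This works, but it turns a client-streaming call into a bidirectional one and adds a full round trip per message, which is why I'd prefer to lean on HTTP/2's built-in flow control instead.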
I have two concrete questions.
Does the C# implementation automatically apply backpressure? For example, if the consuming side is slow to call MoveNext on the async stream, will the client side take longer to return from its calls to WriteAsync?
Does the C# implementation of gRPC have any configurable way of limiting the buffering of messages for a streaming RPC call? For example, capping the number of buffered messages or limiting the amount of space in the call's buffer.
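For reference, the server-side consumption pattern where I see the buffering looks like this (simplified; `ProcessAsync` stands in for my real, slow handler):

```csharp
// Server handler for a client-streaming call: messages are read one at
// a time, but processing is slow, so unread messages accumulate in
// gRPC's internal buffers while the client keeps writing.
public override async Task<Summary> Upload(
    IAsyncStreamReader<DataChunk> requestStream,
    ServerCallContext context)
{
    while (await requestStream.MoveNext())
    {
        await ProcessAsync(requestStream.Current); // slow consumer
    }
    return new Summary();
}
```

Ideally the slow `MoveNext` loop above would propagate backpressure to the client's `WriteAsync` via HTTP/2 flow control, without the explicit ack round trip.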