I am using WebSockets to connect a JavaScript WebSocket client to a Java WebSocket server running inside an Android application, using the Java-WebSocket library. The Android app sends a small message every few milliseconds to the JavaScript client.
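For context, the sending side is essentially a scheduled loop like the sketch below (simplified; `PERIOD_MS` and the payload are placeholders, not my actual code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.java_websocket.WebSocket;

public class PeriodicSender {

    private static final long PERIOD_MS = 5; // "every few milliseconds"

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Send a small message to the connected client at a fixed rate.
    public void start(final WebSocket conn) {
        scheduler.scheduleAtFixedRate(() -> {
            if (conn.isOpen()) {
                conn.send(String.valueOf(System.nanoTime())); // placeholder payload
            }
        }, 0, PERIOD_MS, TimeUnit.MILLISECONDS);
    }
}
```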
Using the basic (and intuitive) approach for this situation, the delay between received messages, measured inside the JavaScript client, shows approximately the following repeating pattern: 200 ms, 0.1 ms, 0.1 ms, 0.1 ms, 0.1 ms, 0.1 ms, 0.1 ms, 200 ms, 0.1 ms, 0.1 ms, 0.1 ms, 0.1 ms, 0.1 ms, 0.1 ms, ... In other words, a roughly 200 ms gap is followed by a burst of six messages arriving almost back to back.
This looks like the effect of Nagle's algorithm, which is enabled by default on TCP sockets and coalesces several small messages into one segment before sending them.
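The obvious counter-measure is to request TCP_NODELAY on the server's sockets. A minimal sketch of what I mean, assuming a recent Java-WebSocket version where `WebSocketServer` inherits `setTcpNoDelay()` from `AbstractWebSocket` (handler bodies are stubs):

```java
import java.net.InetSocketAddress;
import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

public class NoDelayServer extends WebSocketServer {

    public NoDelayServer(int port) {
        super(new InetSocketAddress(port));
        // Request TCP_NODELAY (i.e. disable Nagle's algorithm) on every
        // connection this server accepts.
        setTcpNoDelay(true);
    }

    @Override public void onOpen(WebSocket conn, ClientHandshake handshake) { }
    @Override public void onMessage(WebSocket conn, String message) { }
    @Override public void onClose(WebSocket conn, int code, String reason, boolean remote) { }
    @Override public void onError(WebSocket conn, Exception ex) { }
    @Override public void onStart() { }
}
```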
Since I have not found a way to guarantee that Nagle is actually disabled, I followed the approach proposed in this old question: the client sends an acknowledgement message back to the server for every message it receives, and the system then behaves properly. But since the acknowledgement has no real purpose of its own (it is more of a hack), it should be avoided.
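For reference, the server side of that workaround looks roughly like this (a sketch; the `"ack"` token and `nextPayload()` are placeholders for illustration, not my actual code):

```java
import java.net.InetSocketAddress;
import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

public class AckPacedServer extends WebSocketServer {

    public AckPacedServer(int port) {
        super(new InetSocketAddress(port));
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) {
        // Kick off the exchange; every further send is triggered by an ack.
        conn.send(nextPayload());
    }

    @Override
    public void onMessage(WebSocket conn, String message) {
        // Only one message is ever in flight, so small messages never pile
        // up in the send buffer where Nagle could coalesce them.
        if ("ack".equals(message)) {
            conn.send(nextPayload());
        }
    }

    private String nextPayload() {
        return String.valueOf(System.nanoTime()); // placeholder payload
    }

    @Override public void onClose(WebSocket conn, int code, String reason, boolean remote) { }
    @Override public void onError(WebSocket conn, Exception ex) { }
    @Override public void onStart() { }
}
```

Pacing the sends with acks keeps at most one message in flight, which removes the clumping, but it costs a full round trip per message.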
So the question is: is this acknowledgement trick still the best solution to this problem? Do you know of any way to avoid the clumping directly?
Thank you.