
The RTCDataChannel API does not provide any kind of flow control or back-pressure. Does this mean that a sender could, theoretically, crash the browser of the receiver? As I understand it, the browser (Chrome, Firefox, etc. all use SCTP under the hood) reads from the SCTP connection and schedules the JS callback that consumes the packet. If the event queue cannot keep up with the sender, the browser basically keeps reading packets and storing them in a buffer, which grows indefinitely. So when you connect two browsers, the sender can always overwhelm the other one, because there is no barrier like the TCP receive window or anything similar.
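
To make the scenario concrete, here is a minimal receiver sketch (illustrative only; `dataChannel` and `processChunk` are made-up placeholders): the onmessage handler can only accept data, there is no way for this side to tell the sender to slow down, so any backlog piles up in a JS-side queue on top of whatever the browser buffers internally.

```javascript
// Illustrative sketch only: a receiver whose consumer is slower than the sender.
const pending = [];

dataChannel.onmessage = (event) => {
  // The API offers no way to refuse or delay delivery, so all we can do is queue.
  pending.push(event.data); // grows without bound if the sender is faster
};

// A deliberately slow consumer (e.g. waiting on disk, user action, a DB query).
function drainOne() {
  if (pending.length > 0) {
    processChunk(pending.shift()); // hypothetical application handler
  }
  setTimeout(drainOne, 100);
}
drainOne();
```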

The same problem applies to the WebSocket API as well.

Am I just missing something, or are these APIs broken? If I'm right, this would be a severe security issue when talking to unauthenticated browsers (in a torrent scenario, for instance).

Kr0e
  • I would ASSUME that the underlying code in the browser would take this into account but you know what happens when you make an assumption... – Benjamin Trent Nov 21 '14 at 14:06
  • FYI: the other way round also causes problems. If you try to send data, you can send as fast as possible; the data is buffered internally. But at some point you will run out of memory because the network cannot keep up with the buffered data. WebRTC offers the bufferedAmount info, but this means you have to poll that value to keep the buffer size between a high and a low watermark. Crazy... This seems just broken (or at least not fully thought through) – Kr0e Nov 21 '14 at 14:42

1 Answer


The WebRTC data channel used to be based on UDP. During that time there was artificial throttling imposed by the browser in order to prevent network flooding. This was the case until Chrome v32, I believe.

Nowadays the data channel is based on SCTP, which has built-in flow control (FC), and there is no browser throttling any more (thank God). The parameters that control FC are not exposed through the API, but that doesn't mean there is no FC.

I am not familiar with the implementation of WebRTC in Chrome/FF, but I don't think you can crash the browser with a simple flood attack. "The producer is faster than the consumer" is a pretty old problem.

That said, I have been working with the data channel for more than a year now and have seen my browser crash almost on a daily basis, so there are probably many bugs in the WebRTC implementation. Hopefully they won't pose any security threat.

Sending big chunks of data using the WebRTC data channel is not a particularly pleasant experience. The API doesn't offer a "channel is ready for write" callback or anything of the sort, so yes, you have to poll the bufferedAmount value and try to keep it inside an optimal window. To add insult to injury, bufferedAmount used to be broken under Windows versions of Chrome (it was always 0), but I think they fixed that in Chrome v37 or around that time.
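
For illustration, a minimal version of that polling loop could look like the sketch below. Only send() and bufferedAmount are real API; the watermark values, sendQueue and pump are made up for the example.

```javascript
// Rough sketch of sender-side throttling via bufferedAmount polling.
const HIGH_WATERMARK = 1024 * 1024; // pause sending above ~1 MB buffered (made-up value)
const LOW_WATERMARK = 256 * 1024;   // resume once drained below ~256 KB (made-up value)
const sendQueue = [];               // application-level queue of outgoing chunks

function pump(channel) {
  // Hand data to the channel only while its internal buffer is below the high watermark.
  while (sendQueue.length > 0 && channel.bufferedAmount < HIGH_WATERMARK) {
    channel.send(sendQueue.shift());
  }
  if (sendQueue.length > 0) {
    // No "ready to write" callback, so poll until the buffer drains below the low watermark.
    const poll = setInterval(() => {
      if (channel.bufferedAmount <= LOW_WATERMARK) {
        clearInterval(poll);
        pump(channel);
      }
    }, 50);
  }
}
```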

IMHO the WebRTC API is not very well thought through, but it does the job, and honestly I cannot think of any JS API that is well thought through.

Svetlin Mladenov
  • I see. As far as I can tell the SCTP implementation is definitely able to control flow, but that only applies to the browser kernel. If the browser kernel is not fast enough to read, then yes. But the browser HAS to keep reading, since every data channel connected to the same host is multiplexed over a single SCTP connection, so the browser cannot tell which data channel the next packet is destined for. – Kr0e Nov 23 '14 at 14:12
  • Imagine an app which opens a data channel and lets the user choose a directory to store data in. While the user waits, the other side can keep sending data continuously, which has to be handled by the JS app. The app cannot say "stop it"! I hope they really think about this issue in a later version. – Kr0e Nov 23 '14 at 14:12
  • If the "producer" mindlessly just sends data and the "consumer" cannot consumer it for what ever reason then buffers on both side will start to increase. However the buffer on the sending side cannot increase forever. After a certain threshold (I think it was a couple of megabytes) the data channel will be forcefully closed and all buffered data discarded. – Svetlin Mladenov Nov 23 '14 at 17:55
  • Sorry, my example was a bit incomplete. The producer sends as fast as possible without buffering. I argue that the consumer's browser will read the data and pass it to the JS callback, and the JS callback then has to decide what to do. While it is still waiting for user action, the app has to buffer the data or slow the producer down, and that's my point: any app relying on such behavior is fundamentally flawed. Or let's imagine the consumer must fetch a database query; any request<->response scenario would be affected by this issue. – Kr0e Nov 23 '14 at 18:26
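
In the absence of API-level back-pressure, the usual workaround the comments hint at ("slow the producer down") is flow control at the application layer. The sketch below is one possible credit-based scheme, assuming you control both peers; receiveChannel and sendChannel stand for the same data channel as seen from each peer, and the message format, CREDIT value and consumeChunk are invented for the example (only send()/onmessage are real API).

```javascript
// Receiver side: allow at most CREDIT unacknowledged chunks, returning one
// credit per chunk actually consumed.
const CREDIT = 16;

receiveChannel.onmessage = (event) => {
  consumeChunk(event.data).then(() => {                   // hypothetical async consumer
    receiveChannel.send(JSON.stringify({ type: "ack" })); // hand a credit back
  });
};

// Sender side: never have more than CREDIT chunks in flight.
let credits = CREDIT;
const outgoing = [];

sendChannel.onmessage = (event) => {
  if (JSON.parse(event.data).type === "ack") {
    credits++;
    trySend();
  }
};

function trySend() {
  while (credits > 0 && outgoing.length > 0) {
    sendChannel.send(outgoing.shift());
    credits--;
  }
}
```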