20

I'm currently developing a Java WebSocket client application and I have to make sure that every message from the server is received by the client. Is it possible to lose messages (once they have been sent from the server) due to a connection interruption? WebSocket is based on TCP, so this shouldn't happen, right?

Takahiko Kawasaki
Adrian Krebs
  • Either you receive the message, or you lose the connection entirely. – user253751 Sep 09 '15 at 08:02
  • But after I lost the connection entirely, the websocket clientEndpoint wouldn't work anymore, right? That's why I built a reconnect handler which sends a ping/pong message every 30 seconds to check whether the connection is still up and, if not, tries to create a new connection to the server (see the keepalive sketch below). – Adrian Krebs Sep 09 '15 at 08:08
  • received does not mean it was read – symbiont Jun 09 '19 at 19:35
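
Regarding the reconnect handler mentioned in the comments above, a minimal keepalive sketch with the standard javax.websocket (JSR 356) client API might look like the following. The class name, the 30-second ping interval, the 60-second pong timeout, and the reconnect strategy are assumptions for illustration, not details from the question:

    import java.net.URI;
    import java.nio.ByteBuffer;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.ContainerProvider;
    import javax.websocket.OnMessage;
    import javax.websocket.PongMessage;
    import javax.websocket.Session;
    import javax.websocket.WebSocketContainer;

    @ClientEndpoint
    public class KeepAliveClient {

        private volatile Session session;
        private volatile long lastPongAt = System.currentTimeMillis();

        // Record pong replies so we know the connection is still alive.
        @OnMessage
        public void onPong(PongMessage pong, Session session) {
            lastPongAt = System.currentTimeMillis();
        }

        public void connect(URI serverUri) throws Exception {
            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            session = container.connectToServer(this, serverUri);
            lastPongAt = System.currentTimeMillis();
        }

        // Every 30 seconds: send a ping; if no pong has arrived recently, reconnect.
        public void startKeepAlive(URI serverUri) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    if (System.currentTimeMillis() - lastPongAt > 60_000) {
                        connect(serverUri);   // assumed reconnect strategy
                    } else {
                        session.getBasicRemote()
                               .sendPing(ByteBuffer.wrap("keepalive".getBytes()));
                    }
                } catch (Exception e) {
                    // Connection is gone; the next tick will try to reconnect.
                }
            }, 30, 30, TimeUnit.SECONDS);
        }
    }

Note that even with such a keepalive, any messages the server sent between the moment the connection actually died and the moment the client noticed can still be lost, which is exactly what the answers below address.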

4 Answers

23

It can happen. TCP guarantees the order of packets, but it does not guarantee that every packet sent from a server reaches the client when an unrecoverable problem occurs in the underlying network. Imagine someone pulls out your LAN cable or switches off your WiFi access point at the worst possible moment while your application is communicating with your server. TCP does not overcome such trouble.

To ensure that every WebSocket message sent from your server reaches your client, you have to implement some kind of SYN/ACK mechanism in the application layer.
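
As an illustration, a minimal client-side acknowledgement sketch using the javax.websocket API could look like this. The wire format ("<id>|<payload>", acknowledged back as "ACK:<id>") and the class name are assumptions, and the corresponding server-side resend logic is not shown:

    import java.io.IOException;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.OnMessage;
    import javax.websocket.Session;

    @ClientEndpoint
    public class AckingClientEndpoint {

        // Assumed wire format: "<id>|<payload>", acknowledged back as "ACK:<id>".
        @OnMessage
        public void onMessage(String message, Session session) throws IOException {
            int sep = message.indexOf('|');
            String id = message.substring(0, sep);
            String payload = message.substring(sep + 1);

            handle(payload);

            // Acknowledge only after the payload has actually been processed,
            // so the server re-sends anything whose ACK never arrives.
            session.getBasicRemote().sendText("ACK:" + id);
        }

        private void handle(String payload) {
            // application-specific processing
        }
    }

Sending the ACK only after processing also covers the case raised in the comments where the TCP stack has received and acknowledged the data but the application never gets to read it.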

Takahiko Kawasaki
  • it looks like you are suggesting that TCP packets that don't arrive can go unnoticed. isn't the problem in the application layer, where the hardware has already sent an ACK but the application crashes before reading the received message? – symbiont Jun 09 '19 at 19:34
  • Is application-layer ACK enough to ensure exactly *0%* loss packets? We are developing an application where even 0.001% loss will cause serious trouble. Thanks! – ch271828n Apr 13 '20 at 02:21
  • @symbiont That's exactly what I thought as well, afaik TCP does guarantee delivery and this answer suggests it doesn't, my understanding is that due to the async (fire and forget) communication model used by websockets it can't guarantee delivery at the application layer but the OS/hardware will still retransmit the packet/frame if it gets lost midway (provided the TCP connection is still open) – chomba Aug 25 '23 at 22:36
2

TCP is a guaranteed-delivery protocol: packets will be received, in the correct order, by the higher application layers at the far end (as opposed to UDP, which is a send-and-hope protocol).

Generally speaking, TCP should be used for connections where all the data must arrive correctly at the far end. UDP is used where a missing packet can be dropped without significant issue (e.g. streaming services, NTP updates).

Stephen
1

In my game, to counter missed WebSocket messages, I added an int/long ID to each message. When the client detects that something is wrong in the sequence of IDs it receives, it requests new data from the server so it can recover properly.
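
A minimal sketch of that gap-detection idea with the javax.websocket API; the wire format ("<sequenceId>|<payload>") and the requestResend() call are placeholders for whatever recovery mechanism the game actually uses:

    import javax.websocket.ClientEndpoint;
    import javax.websocket.OnMessage;

    @ClientEndpoint
    public class SequencedClientEndpoint {

        private long expectedId = 1;

        // Assumed wire format: "<sequenceId>|<payload>".
        @OnMessage
        public void onMessage(String message) {
            int sep = message.indexOf('|');
            long id = Long.parseLong(message.substring(0, sep));
            String payload = message.substring(sep + 1);

            if (id != expectedId) {
                // A gap in the sequence: ask the server to resend the missing
                // range or to push a fresh snapshot of the current state.
                requestResend(expectedId, id);
            }
            expectedId = id + 1;

            handle(payload);
        }

        private void requestResend(long from, long to) { /* hypothetical recovery call */ }

        private void handle(String payload) { /* application-specific processing */ }
    }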

  • are you thinking of web rtc? web sockets should be delivering messages in order, within the socket. – Jayen Dec 16 '19 at 02:22
  • Websockets use TCP as a transport and that ensures correct delivery and ordering of packets. Did you mean some kind of mechanism in the application layer to find out which response was intended for which request? – Aritra Sur Roy Feb 09 '22 at 07:34
0

TCP provides reliable, ordered, and error-checked delivery through mechanisms such as acknowledgements, retransmission, and flow control. In other words, TCP is a protocol that constantly checks whether the data arrived.

This protocol has different mechanisms to ensure that. You can see the difference between TCP and UDP (which has no such mechanisms) in the link below.

Difference between tcp and udp

Yaron