
I am trying to develop server and client programs in Python 2.7 that can switch between UDP and TCP, based on this echo program:

TCP vs. UDP socket latency benchmark

For now, I am just running everything on localhost.

When I run it over TCP (is_UDP = False), the server program shows that no packets are lost (total_perdu = 0).

But if I run it over UDP (is_UDP = True), some packets are lost.

This is my code for the server:

import socket
from numpy import *

server_address = ("127.0.0.1", 4444)
client_address = ("127.0.0.1", 4445)
bufferSize  = 4096

# is_UDP = True
is_UDP = False

# Create a datagram socket
if is_UDP == True:
    UDP_Server_Socket_in = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    UDP_Server_Socket_in.bind(server_address)

    UDP_Server_Socket_out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    UDP_Server_Socket_out.connect(client_address)

    connection = UDP_Server_Socket_in
    print("UDP server is running...")
else:
    TCP_Server_Socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    TCP_Server_Socket.bind(server_address)
    TCP_Server_Socket.listen(1)

    connection, client_address = TCP_Server_Socket.accept()
    print("TCP server is running...")

t = 0
total_perdu = 0
i = 0

while(True):
    i += 1

    # Receive packet from client
    data_2 = connection.recv(bufferSize)
    tab = fromstring(data_2, dtype="int32")
    size = len(data_2)

    while size < bufferSize:
        data_2 = connection.recv(bufferSize - size)
        size += len(data_2)

    if data_2:
        perdu = int(tab[0]) - t - 1
        sperdu = ""
        if perdu > 0:
            total_perdu += perdu
            sperdu = "(%d)" % perdu

        print "Receive data : %s  %d  %d %s" % (tab[0], len(tab), total_perdu, sperdu)
        t = int(tab[0])

And this is my code for the client:

import socket
from numpy import *
import time


server_address = ("127.0.0.1", 4444)
client_address = ("127.0.0.1", 4445)

# Packets variables
packet_size = 1024
total_packet = 1000

bufferSize = 4*packet_size


# Variables initialization

error = 0
total_throughput = 0
total_latene = 0
total_ratio = 0
total_stop_time_1 = 0
total_stop_time_3 = 0

# Creation of a packet
send_tab = zeros(packet_size, int)
for i in range(0, packet_size):
    send_tab[i] = i
data_size = (send_tab.size+8)*send_tab.itemsize
print "Data size : %d" % data_size
print "Tab : %s \n" % send_tab

# is_UDP = True
is_UDP = False

# Create a socket at client side
if is_UDP == True:
    UDP_Client_Socket_out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    UDP_Client_Socket_out.connect(server_address)

    UDP_Client_Socket_in = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    UDP_Client_Socket_in.bind(client_address)

    connection = UDP_Client_Socket_out
    print("UDP client is running...")
else:
    TCP_Client_Socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    TCP_Client_Socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)
    TCP_Client_Socket.connect(server_address)
    connection = TCP_Client_Socket
    print("TCP client is running...")

start_time_0 = time.clock()

for packet_number in range(0,total_packet):
    send_tab[0] = packet_number

    # Send packet to server
    start_time = time.clock()
    sent = connection.send(send_tab)
    if sent:
        stop_time_1 = time.clock() - start_time

    # Calculate throughput and ratio
    throughput = data_size / (stop_time_1 * 1000000)

    print "stop_time_1 \t%f" % stop_time_1

    total_throughput += throughput

stop_time_3 = (time.clock() - start_time_0)

print "Results : \n"
print "     Packet error : %d \n" % error
print "     Thoughput: %f Mo/s \n " % (total_throughput/total_packet)
print "     total_stop_time_1 : %f s    \n " % (total_stop_time_1/total_packet)
print "     stop_time_3 : %f \n" % stop_time_3

So, I have 3 questions about it:

  1. Is it normal for some packets to be lost even when the client and server run on the same machine (localhost)?

  2. If yes, why?

  3. Will I have the same problem if I program it in C?

Valeriy
  • Why not? UDP datagrams can be lost any time. – user207421 Feb 15 '19 at 08:54
  • @user207421 : Even if I use client & server programs in the same computer? – Mehdi91 Feb 17 '19 at 12:20
  • Sure, why not? Where does it say that can't happen? – user207421 Feb 21 '19 at 14:16
  • I had the same problem when I chose too large udp packet sizes (2**16 - 64 bytes). Drastically reducing the packet size to around 1400 bytes resulted in zero packet loss. But I'm just encoding a video signal, not stress testing the tcp/ip stack – McSebi Aug 08 '23 at 10:53

1 Answer


From your code it looks like you expect to receive the UDP packets in the same order you are sending them. I don't think you are losing packets; rather, the order in which they are received by the server is not the expected one (which is normal for UDP).

Also, keep in mind that UDP guarantees neither the ordering nor the delivery of packets, so your program has to account for both.
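
One way to take that into account on the server side is to stop assuming that each datagram carries the previous sequence number plus one, and simply record whatever arrives. Here is a minimal sketch for the UDP case, reusing bufferSize and the bound UDP_Server_Socket_in from your code; received_seq and expected_packets are names I am introducing here:

received_seq = []                # sequence numbers in arrival order
expected_packets = 1000          # matches total_packet in the client

while len(received_seq) < expected_packets:
    # One recvfrom() call returns exactly one datagram. If that datagram
    # was dropped, this blocks forever, so a real version needs a timeout.
    data, addr = UDP_Server_Socket_in.recvfrom(bufferSize)
    tab = fromstring(data, dtype="int32")
    received_seq.append(int(tab[0]))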

I would refactor the code to append tab[0] to a list, then sort it and check for gaps at the end of the transmission. Another way would be to send a reply from the server and do the check on the client (but this might inflate the numbers if you deploy it over the Internet).
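
The end-of-transmission check could then look something like this (again just a sketch; received_seq is the list collected above and 1000 matches the client's total_packet):

def find_gaps(received_seq, total_packet):
    # Report which sequence numbers never arrived, independently of
    # the order in which the datagrams were received.
    seen = set(received_seq)
    missing = [n for n in range(total_packet) if n not in seen]
    reordered = received_seq != sorted(received_seq)
    return missing, reordered

missing, reordered = find_gaps(received_seq, 1000)
print "Lost : %d   Arrived out of order : %s" % (len(missing), reordered)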

Mihai