
I just implemented my first UDP server/client. The server is on localhost. I'm sending 64 KB of data from the client to the server, which the server is supposed to send back. The client then checks how many of those 64 KB are still intact, and they all are. Always. What are the possible causes of this behaviour? I was expecting at least -some- data loss.

client code: http://pastebin.com/5HLkfcqS server code: http://pastebin.com/YrhfJAGb
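
In case the links go stale, here is a rough Python sketch of what the two programs do. This is not the pastebin code, just the general shape of the setup; the port number and the 1 KiB chunk size are placeholders:

    import socket

    HOST, PORT = "127.0.0.1", 9999   # placeholder address/port
    CHUNK = 1024                     # the 64 KB is sent as 1 KiB datagrams
    TOTAL = 64 * 1024

    def echo_server():
        # Run in one terminal: echo every datagram back to its sender.
        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind((HOST, PORT))
        while True:
            data, addr = srv.recvfrom(CHUNK)
            srv.sendto(data, addr)

    def echo_client():
        # Run in another terminal: send the data, read back the echo,
        # and count how many bytes survived the round trip unchanged.
        cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        cli.settimeout(1.0)
        payload = bytes(range(256)) * 4          # 1 KiB test pattern
        intact = 0
        for _ in range(TOTAL // CHUNK):
            cli.sendto(payload, (HOST, PORT))
            try:
                echoed, _ = cli.recvfrom(CHUNK)
                if echoed == payload:
                    intact += len(echoed)
            except socket.timeout:
                pass                              # datagram or echo lost
        print(f"{intact} of {TOTAL} bytes came back intact")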

PS: A newbie in network programming here, so please don't be too harsh. I couldn't find an answer to my problem.

tsuby

1 Answer


The reason you are not seeing any lost datagrams is that your network stack simply isn't running into any trouble. Your localhost connection can easily cope with what you are throwing at it: on a decent CPU, a localhost connection can process several hundred megabytes of data per second.

To see dropped datagrams you have to increase the probability of interference. You have several options:

  • increase the load on the network (see the sketch after this list)
  • keep your CPU busy with other tasks
  • use a "real" network and transfer data between real machines
  • run your code over a DSL line
  • set up a virtual machine and simulate network outages (VMware Workstation is able to do so)
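
As an illustration of the first point, here is a small self-contained Python sketch (the port and counts are arbitrary) that floods a localhost socket faster than it is drained, so the kernel's receive buffer overflows and datagrams are dropped:

    import socket

    PORT = 50007        # arbitrary port for the demo
    COUNT = 20000       # number of datagrams to blast
    SIZE = 1024         # bytes per datagram

    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(("127.0.0.1", PORT))
    recv_sock.settimeout(1.0)

    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Send sequence-numbered datagrams without ever reading from the
    # receiving socket; once its buffer is full, the kernel silently
    # drops the rest.
    for seq in range(COUNT):
        send_sock.sendto(seq.to_bytes(4, "big") + bytes(SIZE - 4),
                         ("127.0.0.1", PORT))

    # Now drain whatever actually made it into the buffer.
    received = set()
    try:
        while True:
            data, _ = recv_sock.recvfrom(SIZE)
            received.add(int.from_bytes(data[:4], "big"))
    except socket.timeout:
        pass

    print(f"sent {COUNT}, received {len(received)}, "
          f"lost {COUNT - len(received)}")

With a default receive buffer size, only a few hundred of these datagrams fit before the kernel starts discarding them, so the reported loss should be clearly visible.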

And this might be an interesting read: What would cause UDP packets to be dropped when being sent to localhost?

Marged