I know some of the probable reasons for this from doing my own R&D, but I am still confused.
My application sends a small image of max. 2000 KB to my server, where it is drawn onto my canvas control. I use UDP because when I use TCP the image appears 'jumpy': TCP does more work than UDP and sends an ACK back to my client. That is why I use UDP, and indeed for this type of scenario I have seen UDP sometimes recommended. I am not bothered if some images don't make it to my server, you see. Fire and forget is what I need.
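For context, here is a rough sketch of the kind of fire-and-forget send I mean. It is in Python purely for illustration (my real app is not Python), and the host, port and chunk size are made-up placeholders:

```python
import socket

# Minimal sketch of the fire-and-forget send (illustrative only).
# SERVER_HOST / SERVER_PORT / CHUNK_SIZE are placeholders, not my real values.
SERVER_HOST = "192.168.1.10"
SERVER_PORT = 5005
CHUNK_SIZE = 1200   # keep each datagram well under the 65507-byte UDP limit

def send_image(image_bytes: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Split the image into datagram-sized chunks and send them.
        # No acknowledgements are expected back; if a chunk is lost, it is lost.
        for offset in range(0, len(image_bytes), CHUNK_SIZE):
            chunk = image_bytes[offset:offset + CHUNK_SIZE]
            sock.sendto(chunk, (SERVER_HOST, SERVER_PORT))
    finally:
        sock.close()
```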
This all works well on my own LAN. Recently (before COVID!) I went away to a remote Welsh cottage where the internet is not that good, and I thought it would be a good idea to stress test my application. The images never arrived at my server. So, playing around, I reduced the quality of the images so that I was only sending on average 1000 KB per UDP send. The images started to get through. The quality was not that great, but I can live with that.
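For what it's worth, this is roughly how I shrink the images: re-encode as JPEG and keep stepping the quality down until the result fits a size budget. A minimal Python sketch using Pillow, just for illustration (my real code uses a different imaging library, and the budget and quality steps here are made up):

```python
import io
from PIL import Image  # Pillow, used here only to illustrate the idea

def encode_under_budget(image: Image.Image, max_bytes: int, start_quality: int = 85) -> bytes:
    """Re-encode as JPEG, stepping the quality down until the result fits max_bytes."""
    rgb = image.convert("RGB")          # JPEG has no alpha channel
    quality = start_quality
    while True:
        buf = io.BytesIO()
        rgb.save(buf, format="JPEG", quality=quality)
        data = buf.getvalue()
        if len(data) <= max_bytes or quality <= 10:
            return data
        quality -= 10                   # drop the quality a notch and try again
```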
So I thought that if I could detect poor UDP transmission rates on the current LAN, I could automatically reduce the quality of my images. I googled this, and all I could find was that I could check the MTU on that LAN. But the advice also says you should not go beyond about 512 bytes per datagram, while at the same time the maximum UDP packet can be 65507 bytes, and yet the chances of a UDP packet of that size getting through are apparently remote.
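To make the "detect poor transmission rates" idea concrete, this is the sort of thing I had in mind: stamp each datagram with a sequence number on the client and let the server estimate a loss rate from the gaps. A rough Python sketch under my own assumptions (the 4-byte header and the framing are made up, not anything standard):

```python
import struct

# Each datagram is prefixed with a 4-byte big-endian sequence number
# (my own made-up framing, not part of UDP itself).
HEADER = struct.Struct("!I")

class LossEstimator:
    """Track the highest sequence number seen and how many datagrams actually arrived."""

    def __init__(self) -> None:
        self.received = 0
        self.highest_seq = -1

    def on_datagram(self, payload: bytes) -> bytes:
        seq, = HEADER.unpack_from(payload)
        self.received += 1
        self.highest_seq = max(self.highest_seq, seq)
        return payload[HEADER.size:]    # the actual image chunk

    def loss_rate(self) -> float:
        expected = self.highest_seq + 1  # assumes sequence numbers start at 0
        if expected <= 0:
            return 0.0
        return 1.0 - (self.received / expected)
```

The idea would be for the server to report this loss rate back now and then so the client can step the image quality down when it climbs, but that is exactly the part I am unsure how to do well, hence the question.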
So I'm really confused. Like I said, I can handle the odd missed image, but I want to increase the chance of most of the images getting through. If the MTU only gives me 512 bytes to go by, are there any other metrics I can use?
Thanks