Background
I am attempting to stream a live feed of a remote computer's desktop to my application. To do this I am using connection-oriented (TCP) sockets, capturing a frame of the client's screen and sending it to the server.
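For reference, here is a simplified sketch of the client-side send loop (Python used purely for illustration; capture_frame() is a stand-in for whatever actually grabs and encodes a screenshot):

    import socket
    import struct
    import time

    def capture_frame() -> bytes:
        """Stand-in: return one encoded screenshot as bytes."""
        raise NotImplementedError

    def stream_desktop(host: str, port: int, interval: float = 0.1) -> None:
        # One long-lived TCP connection; a frame is captured and sent
        # roughly every `interval` seconds (0.1 s = 10 FPS).
        with socket.create_connection((host, port)) as sock:
            while True:
                frame = capture_frame()
                # Length-prefix each frame: TCP is a byte stream, so the
                # receiver needs to know where one frame ends and the next begins.
                sock.sendall(struct.pack("!I", len(frame)) + frame)
                time.sleep(interval)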
My Research
I am sending a frame (screenshot) every 100 milliseconds (that's 10 FPS). Each frame is around 145 KB, which means I need to send about 1,450 KB per second (roughly 1.45 MB/s, or about 11.6 Mbit/s).
My internet connection has a maximum upload speed of 0.32 Mbit/s. Because I need to send roughly 11.6 Mbit/s, my connection is about 11.3 Mbit/s short of what I need. So by my calculations, in order to stream the desktop within that limit each frame would have to be approximately 4 KB (about 4,000 bytes of payload, before the per-packet TCP/IP header overhead), which is realistically impossible given the current system, even when sending only the updated parts of the desktop and compressing the bitmaps.
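To make the arithmetic explicit (so someone can point out if I have slipped a unit somewhere), this is the budget calculation I am working from:

    # Bandwidth budget using the figures above (decimal units: 1 KB = 1,000 bytes).
    frame_size_bytes = 145_000          # ~145 KB per frame
    fps = 10                            # one frame every 100 ms

    required_bps = frame_size_bytes * 8 * fps    # 11,600,000 -> ~11.6 Mbit/s needed
    available_bps = 0.32 * 1_000_000             # 0.32 Mbit/s upload

    per_frame_budget = available_bps / 8 / fps   # 4,000 bytes -> ~4 KB per frame
    print(required_bps, available_bps, per_frame_budget)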
Question
I am not sure that the system is limited exactly by the upload speed. I think this because roughly 4 KB is a ridiculously small size for a frame. I can stream my desktop perfectly smoothly using similar software (such as TeamViewer, Join.me and Skype), and even though these packages use far more intelligent protocols than mine (good question here), I highly doubt that they are sending just 4 KB per frame / desktop update.
So my question is, ultimately: are my calculations at all accurate, and why? My aim here is to decide on an appropriate size for each frame so that I can then work towards that size and calculate the quality / interval for different connection speeds. I am interested in any comments / answers that are helpful to my situation, of course, but the answer I accept will be the one that answers my actual question.
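For context, this is roughly how I was planning to derive the per-frame budget for a given connection once I know what a realistic frame size is (the 0.8 utilisation factor is just my own guess at headroom for TCP/IP overhead and other traffic, not a measured value):

    def frame_budget_bytes(uplink_bps: float, fps: float, utilisation: float = 0.8) -> float:
        """Rough per-frame byte budget for a given uplink speed and frame rate.

        `utilisation` is an assumed fraction of the link usable for frame data
        after protocol overhead and other traffic -- a guess, not a measurement.
        """
        return uplink_bps * utilisation / 8 / fps

    # Example: my 0.32 Mbit/s uplink at 10 FPS and at 5 FPS.
    print(frame_budget_bytes(0.32e6, 10))   # 3200.0 bytes per frame
    print(frame_budget_bytes(0.32e6, 5))    # 6400.0 bytes per frame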