I am designing a non-commercial, open-source client app that needs to download exactly 100 KB of data from a server at a regular interval and show an alert in the client app based on changes in the data. I need to trade off user bandwidth against the download interval.
Analysis:

- If I set the interval to 1 hour, then within 1 month the app will download 30 × 24 × 100 KB = 72 MB.
- If I set the interval to 30 minutes, then within 1 month the app will download 30 × 48 × 100 KB = 144 MB.
- And so on.
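The arithmetic above can be sketched as a small helper; the constants mirror the figures in the question (100 KB per download, a 30-day month):

```python
# Monthly download volume as a function of the polling interval.
FILE_SIZE_KB = 100
DAYS_PER_MONTH = 30

def monthly_usage_mb(interval_minutes: int) -> float:
    """Total data downloaded per month, in MB, for a given polling interval."""
    downloads_per_day = 24 * 60 // interval_minutes
    return DAYS_PER_MONTH * downloads_per_day * FILE_SIZE_KB / 1000

print(monthly_usage_mb(60))   # 1-hour interval  -> 72.0 MB
print(monthly_usage_mb(30))   # 30-min interval  -> 144.0 MB
```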
So far I have considered only the file size, but in practice some portion of the bandwidth is consumed by control traffic (connection setup, acknowledgements, headers) in addition to the data itself. For downloading a file of exactly 100 KB from a server over TCP, how much control-flow overhead should I budget for in my analysis? Is there any guideline, reference, or research on this topic?
For example, if 10 KB of control traffic were used per download, then at the 30-minute interval the monthly usage would include 30 × 48 × 10 KB = 14.4 MB of extra data that needs to be identified in my analysis.
Note: (1) I am limited to analysing only the client-app part. (2) No server-side changes can be made at the moment (e.g., switching from pull-based to push-based, or adding a partial-data-change API, cannot be applied). (3) I am limited to downloading the file over TCP. (4) Although this level of granularity is rarely considered in practice, assume that in my case the analysis must be granular enough that I need to know the data-to-control bandwidth ratio.