
I have a custom multi-threaded HTTP server (in C++) that:

1) accepts POSTs of media TS segments from different video streams,
2) archives a batch of them into a zip file (on disk), and
3) uploads the zip file to a preconfigured origin/content server (say, after 100 segments have been archived into the zip).

The problem is that each zip file is around ~100 MB, and with a high rate of client POSTs (150 per second), uploading these zip files blows up vsize/RSS and crashes the server (since uploading requires reading the zip file into memory).
Is there a memory-aware/memory-efficient way to ensure the upload threads achieve maximum throughput without overwhelming memory?
Perhaps some kind of dynamic rate limiting, so that too many clients don't drive vsize up?

Platform is Linux Ubuntu 14.04

zuselegacy
  • I'd suggest either getting more memory for your server or limiting the number of clients who can upload at the same time (or just add more swap space if performance is not a concern). – Jesper Juhl May 21 '16 at 15:29
  • Why do you need to keep the whole zip file in server memory while uploading it? Can't you just append while uploading? See the use of HTML5 in this answer: http://stackoverflow.com/questions/5053290/large-file-upload-though-html-form-more-than-2-gb – andreaplanet May 21 '16 at 16:17
  • @JesperJuhl Is there a good way to dynamically limit clients instead of using a static limit? – zuselegacy May 21 '16 at 19:53

1 Answer


Answering my own question: these two strategies worked for me:

1) limit the number of concurrent uploads, and
2) read each zip file in chunks and stream the chunks over the open HTTP connection, instead of loading the whole file into memory.

A rough sketch of both ideas follows.
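
This is a minimal sketch under stated assumptions: the question doesn't say which HTTP client the upload threads use, so it assumes libcurl (available on Ubuntu 14.04) and a hypothetical origin URL; the `UploadSlots` class, the slot count, and the helper names are illustrative, not part of the original server.

```cpp
// Sketch only: assumes libcurl as the upload client and a hypothetical origin URL.
// curl_global_init(CURL_GLOBAL_DEFAULT) should be called once at process start.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <string>
#include <curl/curl.h>

// Simple counting semaphore (C++11, works with the Ubuntu 14.04 toolchain)
// used to cap the number of zip uploads in flight at once.
class UploadSlots {
public:
    explicit UploadSlots(int max) : free_(max) {}
    void acquire() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return free_ > 0; });
        --free_;
    }
    void release() {
        { std::lock_guard<std::mutex> lk(m_); ++free_; }
        cv_.notify_one();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    int free_;
};

// libcurl pulls the request body through this callback, so only
// size * nmemb bytes of the zip are in memory at any moment.
static size_t read_chunk(char* buf, size_t size, size_t nmemb, void* userdata) {
    FILE* fp = static_cast<FILE*>(userdata);
    return fread(buf, size, nmemb, fp);
}

// Streams one zip file to the origin with a chunked read; returns true on success.
bool upload_zip(const std::string& path, const std::string& url, long file_size) {
    FILE* fp = fopen(path.c_str(), "rb");
    if (!fp) return false;

    CURL* curl = curl_easy_init();
    if (!curl) { fclose(fp); return false; }

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);                // streamed upload body
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_chunk);  // chunk-by-chunk reads
    curl_easy_setopt(curl, CURLOPT_READDATA, fp);
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)file_size);

    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    fclose(fp);
    return rc == CURLE_OK;
}

// Worker-side usage: block until an upload slot is free, then stream the file.
void upload_worker(UploadSlots& slots, const std::string& path, long size) {
    slots.acquire();
    upload_zip(path, "http://origin.example.com/segments", size);  // hypothetical URL
    slots.release();
}
```

With N upload slots and a small per-connection read buffer, the memory held by uploads stays bounded regardless of how many 100 MB zips are queued on disk; the semaphore effectively rate-limits the upload side without any static cap on the incoming client POSTs.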

zuselegacy