In HTTP Live Streaming the files are split into fixed-size chunks for streaming. What's the rationale behind this? How is this better than having a single file and using byte offsets to retrieve the various chunks?
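To make concrete what I mean by the single-file alternative, here is a rough sketch (the file name and chunk size are made up) of pulling an arbitrary chunk out of one big file by byte offset instead of requesting a separate segment file:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Read `length` bytes starting at `offset` from a single large media file.
// This is the "one file + offsets" approach I'm contrasting with HLS segments.
std::vector<char> read_chunk(const std::string& path,
                             std::streamoff offset,
                             std::size_t length)
{
    std::ifstream in(path, std::ios::binary);
    in.seekg(offset);                                   // jump to the requested byte offset
    std::vector<char> buf(length);
    in.read(buf.data(), static_cast<std::streamsize>(length));
    buf.resize(static_cast<std::size_t>(in.gcount()));  // last chunk may be shorter
    return buf;
}

int main()
{
    // e.g. fetch the third chunk, assuming a (made-up) fixed chunk size of 1 MiB
    const std::size_t chunk_size = 1 << 20;
    auto chunk = read_chunk("movie.ts", 2 * chunk_size, chunk_size);
    return chunk.empty();
}
```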
My rough ideas at the moment:
Splitting the files into multiple chunks reduces file seek time during streaming.
From what I understand, files are stored as a persistent linked list on the HDD. Is this even true for modern file systems (such as NTFS or ext3), or do they use a more sophisticated data structure, such as a balanced tree or hash map, to index the blocks of a file? What's the run-time complexity of seeking (using seekp, tellp, etc.) within a file? A rough way I could test this is sketched below.
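This is a minimal sketch of how I imagine measuring it empirically (the test file path and counts are made up). My understanding is that seekg/tellg themselves only update the stream position, and the real cost is the read that follows, which the filesystem resolves through its block/extent index:

```cpp
#include <chrono>
#include <cstdlib>
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream in("big_test_file.bin", std::ios::binary);  // assumed large test file
    in.seekg(0, std::ios::end);
    const std::streamoff size = in.tellg();  // file size via tellg
    if (size <= 0) return 1;

    char byte;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 10000; ++i) {
        std::streamoff off = std::rand() % size;
        in.seekg(off);      // just sets the stream position
        in.read(&byte, 1);  // the actual block lookup and I/O happen here
    }
    auto elapsed = std::chrono::steady_clock::now() - start;
    std::cout << std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count()
              << " us for 10000 random seek+read pairs\n";
    return 0;
}
```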