I have always fully loaded binary files and then scanned the content blocks within. This was fine for anything up to, say, 500 KB. I'm now looking at scanning files that are 1 GB or larger (client side).
Loading files greater than 5 MB (and as large as 1 GB) into memory in one go is not great, and I would like to move to a process where I can grab blocks of data and process them while the file is still loading. The file format is made up of blocks, each of which carries its own size. So I should be able to grab the header and the first block, and start parsing as the file loads.
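Something like this sketch is roughly what I'm aiming for: using Blob.slice() to read only the next block's header and body, rather than the whole file. (The 4-byte little-endian size field at the start of each block is just an assumption for illustration; the offsets would need adjusting to the real format.)

```javascript
// Sketch: walk a large File/Blob block by block instead of loading it whole.
// Assumes each block begins with a 4-byte little-endian size field that
// gives the length of the block body that follows it (hypothetical layout).
async function scanInBlocks(blob, onBlock) {
    let offset = 0;
    while (offset < blob.size) {
        // Read only the 4-byte size header of the next block.
        const header = await blob.slice(offset, offset + 4).arrayBuffer();
        const size = new DataView(header).getUint32(0, true);
        // Read just that block's body and hand it off for parsing.
        const body = await blob.slice(offset + 4, offset + 4 + size).arrayBuffer();
        onBlock(body);
        offset += 4 + size;
    }
}
```

Because Blob.slice() is lazy, each arrayBuffer() call only pulls that byte range into memory, so peak memory stays at one block rather than the whole file.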
If anyone knows where I can find good examples of code like this working, or useful texts I can read, I would be very grateful.
My current loading code is as follows. A jQuery change handler on an input box calls another function that loads the file into memory and then processes it. scanMyFile(buffer) is the function the ArrayBuffer is then sent to, which identifies everything in the file.
$("#myfile").change(function (e) {
    try {
        readMyFile(e);
    } catch (error) {
        alert("Found this error : " + error);
    }
});
function readMyFile(evt) {
    var f = evt.target.files[0];
    if (f) {
        var r = new FileReader();
        r.onload = function (e) {
            var buffer = r.result;
            scanMyFile(buffer);
        };
        r.readAsArrayBuffer(f);
    } else {
        alert("Failed to load file");
    }
}
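One direction I've been looking at, as an alternative to readAsArrayBuffer on the whole file, is the Streams API: file.stream() hands back chunks incrementally as the file is read. The chunk boundaries are chosen by the browser, not by my format, so chunks have to be buffered until a complete block is available. A sketch of that idea (again assuming a hypothetical 4-byte little-endian size field per block):

```javascript
// Sketch: consume a File via the Streams API, emitting complete blocks.
// Chunks arrive at arbitrary boundaries, so leftover bytes are carried
// over between reads until a whole block (4-byte LE size header assumed)
// has been buffered.
async function scanStream(file, onBlock) {
    const reader = file.stream().getReader();
    let pending = new Uint8Array(0);
    for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        // Append the new chunk to whatever was left from the last read.
        const merged = new Uint8Array(pending.length + value.length);
        merged.set(pending);
        merged.set(value, pending.length);
        pending = merged;
        // Emit every complete block currently buffered.
        while (pending.length >= 4) {
            const size = new DataView(pending.buffer, pending.byteOffset)
                .getUint32(0, true);
            if (pending.length < 4 + size) break; // block not fully buffered yet
            onBlock(pending.slice(4, 4 + size).buffer);
            pending = pending.subarray(4 + size);
        }
    }
    // Any bytes still in `pending` here would be a truncated trailing block.
}
```

This keeps memory bounded by the largest single block plus one chunk, which is the behaviour I'm after for 1 GB files, but I'd still appreciate pointers to worked examples or reading on parsing block-structured files this way.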