Preface: This answer, originally written in 2015, shows wrapping FileReader in a promise. That's still a perfectly valid way to do the readAsDataURL operation the question asked about, but if you were going to use readAsText or readAsArrayBuffer (in general, new code shouldn't use the older readAsBinaryString), you'd want to use the File object's built-in promise-based methods text or arrayBuffer instead (or possibly stream if you want to do inline processing of the data as it flows through), all of which are inherited from Blob.
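For example, here's a minimal sketch of those built-in promise-based methods (not part of the original answer; it assumes you're inside an async function and that file is a File obtained from a file input):
// Minimal sketch: assumes we're inside an `async` function and `file`
// is a `File` taken from an `<input type="file">` element.
const text = await file.text();           // whole file as a string
const buffer = await file.arrayBuffer();  // whole file as an ArrayBuffer
// `stream()` returns a ReadableStream if you want to process the data
// incrementally as it arrives:
const reader = file.stream().getReader();
// ...read chunks via `reader.read()` as needed...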
The nature of FileReader is that you cannot make its operation synchronous.
I suspect you don't really need or want it to be synchronous, just that you want to get the resulting URLs correctly. The person suggesting using promises was probably right, not because promises make the process synchronous (they don't), but because they provide standardized semantics for dealing with asynchronous operations (whether in parallel or in series).
Using promises, you'd start with a promise wrapper for readAsDataURL (I'm using ES2015+ here, but you can convert it to ES5 with a promise library instead):
function readAsDataURL(file) {
    return new Promise((resolve, reject) => {
        const fr = new FileReader();
        fr.onerror = reject;
        fr.onload = () => {
            resolve(fr.result);
        };
        fr.readAsDataURL(file);
    });
}
Then you'd use the promise-based operations I describe in this answer to read those in parallel:
Promise.all(Array.prototype.map.call(inputFiles.files, readAsDataURL))
    .then(urls => {
        // ...use `urls` (an array) here...
    })
    .catch(error => {
        // ...handle/report error...
    });
...or in series:
let p = Promise.resolve();
for (const file of inputFiles.files) {
    p = p.then(() => readAsDataURL(file).then(url => {
        // ...use `url` here...
    }));
}
p.catch(error => {
    // ...handle/report error...
});
Inside an ES2017 async function, you could use await. It doesn't do much for the parallel version:
// Inside an `async` function
try {
    const urls = await Promise.all(Array.prototype.map.call(inputFiles.files, readAsDataURL));
} catch (error) {
    // ...handle/report error...
}
...but it makes the series version simpler and clearer:
// Inside an `async` function
try {
    for (const file of inputFiles.files) {
        const url = await readAsDataURL(file);
        // ...use `url` here...
    }
} catch (error) {
    // ...handle/report error...
}
Without promises, you'd do this by keeping track of how many outstanding operations you have so you know when you're done:
const inputFiles = document.getElementsByTagName("input")[0];
inputFiles.onchange = () => {
    const data = []; // The results
    let pending = 0; // How many outstanding operations we have
    // Schedule reading all the files (this finishes before the first onload
    // callback is allowed to be executed). Note that the use of `let` in the
    // `for` loop is important, `var` would not work correctly.
    for (let index = 0; index < inputFiles.files.length; ++index) {
        const file = inputFiles.files[index];
        // Read this file, remember it in `data` using the same index
        // as the file entry
        const fr = new FileReader();
        fr.onload = () => {
            data[index] = fr.result;
            --pending;
            if (pending == 0) {
                // All requests are complete, you're done
            }
        };
        fr.readAsDataURL(file);
        ++pending;
    }
};
Or if for some reason you want to read the files sequentially (but still asynchronously), you can do that by scheduling the next call only when the previous one is complete:
// Note: This assumes there is at least one file; if that
// assumption isn't valid, you'll need to add an up-front check
const inputFiles = document.getElementsByTagName("input")[0];
inputFiles.onchange = () => {
    let index = 0;
    readNext();
    function readNext() {
        const file = inputFiles.files[index++];
        const fr = new FileReader();
        fr.onload = () => {
            // ...use `fr.result` here...
            if (index < inputFiles.files.length) {
                // More to do, start loading the next one
                readNext();
            }
        };
        fr.readAsDataURL(file);
    }
};