You can find the specification of `Promise.all` in section 25.4.4.1 of the ECMAScript 2015 Language Specification.
Your own implementation is indeed doing the right thing. The differences are in details:
The above spec states, in step r of 25.4.4.1.1, that `then` is to be called on each of the promises. These `then` calls happen synchronously (NB: not their callbacks). Whenever one of the promises resolves, a remainingElementsCount is decremented (see step 2.10). When it reaches zero, the promise that was returned by `Promise.all` is resolved (NB: synchronously!).
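To illustrate that distinction (a minimal sketch): the call to `then` itself happens synchronously, while the callback it registers runs later, in a microtask:

```javascript
// `then` is invoked synchronously, but the callback it registers
// only runs asynchronously, after the current synchronous code finishes.
const order = [];
const p = Promise.resolve(42);

order.push('before then');
p.then(v => order.push('callback ' + v)); // registration happens right here
order.push('after then');                 // ...but this line runs first
```

After the microtask queue drains, `order` is `['before then', 'after then', 'callback 42']`: the registration was synchronous, the callback was not.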
Now imagine you have an array of a million promises, and the first one takes the longest to resolve. Your function would still have to perform 999999 `await`s before it returns, while the algorithm in the spec would already have processed the resolutions of those 999999 promises while waiting for the first, and would have little left to do once the first promise finally resolves.
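The difference can be sketched as follows (both functions are hypothetical illustrations, not actual implementations): an `await`-in-a-loop version only observes resolutions one by one, in array order, whereas a `then`-based version subscribes to every promise up front and counts settlements:

```javascript
// Hypothetical "serial observation" version, similar to an await-in-a-loop
// implementation: resolutions are only looked at one by one, in array order.
async function allSerialAwait(promises) {
  const results = [];
  for (const p of promises) {
    // Waits here even if promises later in the array resolved long ago.
    results.push(await p);
  }
  return results;
}

// Spec-like sketch: subscribe to every promise immediately, count the
// settled ones, and resolve as soon as the last one comes in.
function allConcurrent(promises) {
  return new Promise((resolve, reject) => {
    let remaining = promises.length;
    const values = new Array(promises.length);
    if (remaining === 0) return resolve(values);
    promises.forEach((p, i) => {
      Promise.resolve(p).then(value => {
        values[i] = value;
        if (--remaining === 0) resolve(values);
      }, reject);
    });
  });
}
```

Both produce the same result array; the difference is *when* each resolution is processed, which only matters for large arrays or mixed resolution times.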
You can see this for instance in this polyfill/promise.js implementation (where the counting happens by incrementing rather than decrementing):
shaka.polyfill.Promise.all = function(others) {
  var p = new shaka.polyfill.Promise();
  if (!others.length) {
    p.resolve_([]);
    return p;
  }

  // The array of results must be in the same order as the array of Promises
  // passed to all(). So we pre-allocate the array and keep a count of how
  // many have resolved. Only when all have resolved is the returned Promise
  // itself resolved.
  var count = 0;
  var values = new Array(others.length);
  var resolve = function(p, i, newValue) {
    shaka.asserts.assert(p.state_ != shaka.polyfill.Promise.State.RESOLVED);
    // If one of the Promises in the array was rejected, this Promise was
    // rejected and new values are ignored. In such a case, the values array
    // and its contents continue to be alive in memory until all of the
    // Promises in the array have completed.
    if (p.state_ == shaka.polyfill.Promise.State.PENDING) {
      values[i] = newValue;
      count++;
      if (count == values.length) {
        p.resolve_(values);
      }
    }
  };

  var reject = p.reject_.bind(p);
  for (var i = 0; i < others.length; ++i) {
    if (others[i].then) {
      others[i].then(resolve.bind(null, p, i), reject);
    } else {
      resolve(p, i, others[i]);
    }
  }
  return p;
};
But be aware that browser implementations differ. The above polyfill is just one of the possible implementations.
Note that your function is not "Running the promises serially". The promises are "running"* whether you do something with them or not: they do their job as soon as you construct them.
The only thing that gets serialised is the moment you start to look at (i.e. `await`) each promise's resolution. The spec seems to hint that the implementation should listen to the resolve callbacks of all promises from the start. You cannot implement this with `await` in a loop (well, you could, but then you would need to call the `async` function repeatedly, once per promise, and apply `then` to the promises it returns, which would take away any benefit of using `await` over `then` in the first place).
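To see that the promises themselves are not serialised, here is a small sketch (the `delay` helper is made up for this illustration): even when awaited one at a time in a loop, independently created promises run concurrently, so the total wait is roughly the longest delay, not the sum of the delays:

```javascript
// Hypothetical helper: a promise that resolves with `value` after `ms` ms.
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function awaitInLoop() {
  const start = Date.now();
  // All three timers start here, before the first await.
  const promises = [delay(50, 'a'), delay(30, 'b'), delay(40, 'c')];
  const results = [];
  for (const p of promises) {
    results.push(await p); // serialises the *observation*, not the timers
  }
  // Elapsed time is roughly max(50, 30, 40) ms, not 50 + 30 + 40 ms.
  return { results, elapsed: Date.now() - start };
}
```

The loop only delays *when each result is looked at*; the underlying timers were all started when the promises were constructed.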
Then there are some other (obvious) differences in the area of the `this` value and argument validation. Notably, the ECMA spec states:
The `all` function requires its `this` value to be a constructor function that supports the parameter conventions of the `Promise` constructor.
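A minimal illustration of that requirement: because `all` reads its `this` value, it can be invoked on a `Promise` subclass, and the returned promise is then an instance of that subclass, while a non-constructor `this` value throws a `TypeError`:

```javascript
// `all` constructs its result through its `this` value...
class MyPromise extends Promise {}

const combined = MyPromise.all([Promise.resolve(1), Promise.resolve(2)]);
// `combined` is a MyPromise, not a plain Promise, because `this` was MyPromise.

// ...and a `this` value that is not a constructor is rejected outright:
// Promise.all.call({}, []) throws a TypeError.
```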
* Promises don't really "run": a promise is an object, not a function. What may be running is some asynchronous task that was initiated when the promise object was created. The promise constructor callback can potentially call an asynchronous API (like `setTimeout`, `fetch`, ...) which may lead to an asynchronous call of `resolve`. It is better to describe this intermediate state as "pending" (rather than "running").
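A quick sketch of that distinction: the constructor callback runs synchronously during construction, so any asynchronous work is initiated immediately, and the promise merely sits in the "pending" state until `resolve` is eventually called:

```javascript
let started = false;

const pending = new Promise(resolve => {
  started = true;                        // the executor runs right now,
  setTimeout(() => resolve('done'), 10); // and the async work starts here
});

// `started` is already true at this point, even though nobody has called
// .then or awaited `pending` yet: the promise is simply "pending" until
// the timer fires and calls resolve.
```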