
Here I have a promise that is simply resolved right away.

let promise = new Promise((resolve, reject) => {
    resolve(1);
});

The confusion for me starts when I return a Promise from a .then callback, like this:

promise.then(resolve=>{
    console.log(resolve);
    return Promise.resolve(2);
}).then(resolve=>{
    console.log(resolve);
});

promise.then(resolve=>{
    console.log(resolve)
    return 3;
}).then(resolve=>{
    console.log(resolve);
});

The output of these chains is 1 1 3 2, whereas I expected to see 1 1 2 3.

But if I change return Promise.resolve(2); to just return 2, like here:

promise.then(resolve=>{
    console.log(resolve);
    return 2;
}).then(resolve=>{
    console.log(resolve);
});

promise.then(resolve=>{
    console.log(resolve)
    return 3;
}).then(resolve=>{
    console.log(resolve);
});

I'll get the output that I thought I would get in the first place (1 1 2 3).

So can anyone here explain why the output changes based on whether or not Promise.resolve() is used?

BTW, I'm asking this question purely for ACADEMIC reasons!

Od Chan
    FYI, it's fine to ask why it does what it does, but please understand that when you have two independent promise chains, you should consider the order of execution between them indeterminate: with any real asynchronous operations in there, the order is not guaranteed at all. So, if your code cares about the order, then you have to write the code to sequence or control things as needed. Most of what you're asking here is academic and irrelevant to real-world asynchronous operations, for which the order will be indeterminate. – jfriend00 Mar 29 '20 at 10:12
    I posted that previous comment because questions like this are asked all the time, and in the real world of asynchronous operations, the details of how the microtask queue works between two competing promise chains end up not mattering at all. The sequencing will be at the whim of how long the actual asynchronous operations take to complete, which is variable and not something you control. Put two random timers in each of your promise chains and you will have a more accurate simulation of how real-world promise chains work; you have no actual asynchronous operations here. – jfriend00 Mar 29 '20 at 10:15
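
To make that point concrete, here is a minimal sketch (not from the original thread; the delay helper and timings are made up for illustration): once each chain contains real asynchronous work, the finishing order is decided by the timers, not by microtask details.

// Sketch only: with real asynchronous work, completion order is up to the timers.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
const promise = Promise.resolve(1);

promise
    .then(v => delay(Math.random() * 100).then(() => v + 1))
    .then(v => console.log("chain A:", v));

promise
    .then(v => delay(Math.random() * 100).then(() => v + 2))
    .then(v => console.log("chain B:", v));

// Sometimes chain A finishes first, sometimes chain B does.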

2 Answers


The .then callbacks of Promises run as microtasks. Inside a .then, if you return a plain value, like 2 or 3, the next .then chained onto it is queued as a microtask right away. But if you return a Promise, it has to be unwrapped first before the chain can move on to the next .then.

In your first code, once the call stack is clear, the first microtasks run: both initial .then callbacks log 1. The one that returned Promise.resolve(2) now has to be "unwrapped", which queues further jobs on the microtask queue. In contrast, the 3 doesn't need to be unwrapped, so its .then runs right after that, without having to wait, logging 3.

Only once those unwrapping jobs have finished does the 2's .then reach the front of the microtask queue, logging 2.
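
A rough sketch of that timing (mine, not part of the original code): unwrapping a returned promise costs roughly two extra microtask ticks, so padding the plain-value chain with two pass-through .then links brings the order back to 1 1 2 3.

const p = Promise.resolve(1);

p.then(v => {
    console.log(v);                 // 1
    return Promise.resolve(2);      // unwrapping this takes extra microtask ticks
}).then(v => console.log(v));       // 2

p.then(v => {
    console.log(v);                 // 1
    return 3;
}).then(v => v)                     // pass-through link 1
  .then(v => v)                     // pass-through link 2
  .then(v => console.log(v));       // 3 (now logs after 2 again)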

All this said, in real code you shouldn't have to rely on this sort of timing, since it's a bit confusing; if ordering matters, it's best to restructure the code so that it isn't something to worry about, for example as sketched below.
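
One way to do that (just a sketch, assuming you want the three logs in a fixed order) is to keep everything in a single chain so the sequencing is explicit:

Promise.resolve(1)
    .then(v => {
        console.log(v);             // 1
        return Promise.resolve(2);
    })
    .then(v => {
        console.log(v);             // 2
        return 3;
    })
    .then(v => {
        console.log(v);             // 3, order no longer depends on microtask details
    });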

CertainPerformance

Because you are returning a new Promise in case 1, it only gets resolved on a later microtask tick.

Each time the microtask queue is drained, the jobs on it run (not only promise jobs, but those are what matter for this question). In case 1, Promise.resolve(2) is being unwrapped at the same time that the callback resolve => { console.log(resolve); return 3; } runs.

On the next pass over the microtask queue, it's the promise that wraps Promise.resolve(2) that is being settled, not yet the chained .then callback. This is what adds the delay between the two cases.
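
To see that extra step, here's a sketch (not from the original answer) with a hand-written thenable: the enclosing promise calls its then method on a later microtask turn than the callback that returned it.

const loudThenable = {
    then(onFulfilled) {
        // the enclosing promise calls this on a later microtask turn
        console.log("thenable.then called");
        onFulfilled(2);
    }
};

Promise.resolve(1)
    .then(v => {
        console.log("callback ran, returning the thenable");
        return loudThenable;        // treated like a returned promise
    })
    .then(v => console.log("unwrapped value:", v));

// Logs "callback ran...", then "thenable.then called", then "unwrapped value: 2"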

Radu Diță