If the actual full CPU usage is 1 second per request, then 1000 requests will take 1000 seconds of CPU time to process. There is no magic bullet around that. So, to process 1000 requests/sec, you would need 1000 CPUs (obviously spread across a number of clustered servers).
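Just to make that arithmetic concrete, here's a back-of-envelope sketch (the numbers are the hypothetical ones from this example, not measurements of anything):

```js
// Back-of-envelope math for purely CPU-bound work (hypothetical numbers).
const cpuSecondsPerRequest = 1;        // each request needs 1s of actual CPU time
const targetRequestsPerSecond = 1000;

// CPU-seconds of work arriving every second of wall-clock time:
const cpuSecondsNeededPerSecond = cpuSecondsPerRequest * targetRequestsPerSecond; // 1000

// One core supplies 1 CPU-second per second of wall-clock time, so:
const coresNeeded = cpuSecondsNeededPerSecond / 1; // 1000 cores, spread across servers

console.log(`CPU-bound: ~${coresNeeded} cores to sustain ${targetRequestsPerSecond} req/s`);
```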
But, if that 1 second response time is actually just the total elapsed time of the request and much of it is spent waiting on a database lookup or a file operation (both of which are asynchronous), then the CPU is sitting idle most of the time. While one request is waiting for its I/O operation to respond, node.js can be working on another request, and when that one is in turn waiting for I/O, it can start up yet another. In that way, a single thread can have many different requests in flight at the same time (assuming those requests do some async I/O as part of their processing). This is how node.js interleaves multiple different operations, all with one single thread.
This is the big scalability advantage of the design of node.js. Rather than needing a fairly heavyweight OS thread for each concurrent request, it can service a number of requests with a single thread. Instead of relying on OS-level scheduling and a separate stack per thread to time-slice among them, node.js just runs the next event in the event queue whenever the currently running piece of JS returns control to the system because it is waiting for an I/O response event to come back.
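To show what that interleaving looks like, here's a minimal sketch (my own illustration, not code from any real project): the "database lookup" is simulated with setTimeout, but a real driver call that returns a promise behaves the same way as far as the event loop is concerned.

```js
const http = require('http');

// Pretend this is an async database lookup that spends ~900ms waiting
// (I/O wait, not CPU work).
function fakeDbLookup(id) {
  return new Promise(resolve => {
    setTimeout(() => resolve({ id, value: 'some row' }), 900);
  });
}

const server = http.createServer(async (req, res) => {
  // While this request is parked on the await, the single JS thread is
  // free to accept and start processing other incoming requests.
  const row = await fakeDbLookup(req.url);
  res.end(JSON.stringify(row));
});

server.listen(3000, () => {
  console.log('Listening on http://localhost:3000 (one JS thread)');
});
```

If you hit that server with many simultaneous requests, each one still takes roughly 900ms to respond, but they all overlap, so the single thread can complete far more than one request per second of wall-clock time.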
Please tell me how it is possible to complete all requests in only one second with a single thread
This would only be possible if the actual CPU time for any given request was less than 1/1000th of a second (1ms). Otherwise, you would have to involve more than one CPU in order to process 1000 requests/sec. You may also need more than one network card, because you're talking about reading a request and sending a response all in less than 1ms, which is not likely with a single network card.
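A rough way to check where you stand (my own sketch, not from the question; handleRequest is a hypothetical stand-in for whatever work one request actually does) is to measure the CPU time a handler burns with process.cpuUsage() and estimate single-core throughput from that:

```js
// Stand-in for the real per-request work: parse, compute, serialize.
function handleRequest() {
  let s = 0;
  for (let i = 0; i < 1e6; i++) s += i;
  return s;
}

const before = process.cpuUsage();       // { user, system } in microseconds
handleRequest();
const used = process.cpuUsage(before);   // delta since `before`

const cpuMicrosPerRequest = used.user + used.system;
const maxReqPerSecPerCore = 1e6 / cpuMicrosPerRequest;

console.log(`~${cpuMicrosPerRequest} µs of CPU per request`);
console.log(`one core tops out near ${Math.floor(maxReqPerSecPerCore)} req/s`);
// To hit 1000 req/s on a single core, cpuMicrosPerRequest has to stay under 1000 µs (1ms).
```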