I infer that, since an ASP.NET web application is granted a fixed number of worker threads and I/O threads (governed by the <processModel> settings in machine.config), there is no particular benefit, for a large website with many users, in spinning up new threads or pulling threads out of the .NET ThreadPool for tasks like logging errors or sending email.
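For reference, the limits I mean are the ones the CLR reports at run time. This snippet is just my own illustration (the class name is mine, and the exact numbers vary with CLR version, CPU count and the processModel/autoConfig settings):

```csharp
using System;
using System.Threading;

public static class ThreadPoolInfo
{
    // Dumps the pool limits that both ASP.NET requests and any work
    // I queue myself are drawn from.
    public static void Dump()
    {
        int maxWorker, maxIo;
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);

        int freeWorker, freeIo;
        ThreadPool.GetAvailableThreads(out freeWorker, out freeIo);

        Console.WriteLine("Worker threads: {0} free of {1} max", freeWorker, maxWorker);
        Console.WriteLine("I/O threads:    {0} free of {1} max", freeIo, maxIo);
    }
}
```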
In other words, suppose that in my web application, for every incoming request, the request handler executes three tasks in a row: Task A, then Task B, then Task C, and then returns the result, which is an HTML page.
Of the three, Task B has no bearing on the resulting HTML; it might be something like writing a line of text to a log file.
Now, if I move Task B onto another worker thread, the current request thread (itself a worker thread from the ASP.NET thread pool) returns sooner, which means a better response time for my application's user. But the number of concurrent users my application can service is reduced, because Task B now needs a second worker thread allocated from the same ASP.NET thread pool.
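Roughly, this is the pattern I have in mind. DoTaskA, LogSomething and DoTaskC are just placeholders for my real code, and the handler is a bare sketch, not what I actually ship:

```csharp
using System.Threading;
using System.Web;

public class MyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        DoTaskA(context);   // Task A: needed for the response

        // Task B: has no bearing on the HTML, e.g. writing a line to a log file.
        // Queuing it hands it to another worker thread from the same ThreadPool,
        // so the request thread can go straight on to Task C.
        string message = "handled " + context.Request.RawUrl;
        ThreadPool.QueueUserWorkItem(_ => LogSomething(message));

        DoTaskC(context);   // Task C: builds the HTML result
    }

    public bool IsReusable
    {
        get { return false; }
    }

    private void DoTaskA(HttpContext context) { /* ... */ }
    private void LogSomething(string message) { /* ... */ }
    private void DoTaskC(HttpContext context) { /* ... */ }
}
```

The request thread returns sooner, but the queued work item still consumes a thread from the same pool while it runs.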
So, is my understanding correct that in a web application scenario, forking off new threads improves performance (response time) but hurts scalability? And that, therefore, we should only write multi-threaded server-side code if we have the luxury of scaling out by buying lots of hardware?
If not, are we simply trading one for the other?