
How does a web server handle multiple incoming requests at the same time on a single port (80)?

Example: 300k users simultaneously want to see an image from www.abcdef.com, which is assigned IP 10.10.100.100 and port 80. How can www.abcdef.com handle this load of incoming users?

Can one server (assigned IP 10.10.100.100) handle this vast number of incoming users? If not, how can one IP address be assigned to more than one server to handle the load?

General Grievance
  • There's a great answer [here](https://stackoverflow.com/a/29045432/575530) to the similar question "How does Port Number really work in TCP?" – dumbledad Aug 30 '18 at 09:41
  • What helped me even more was the detailed explanation from Network Engineering, here: https://networkengineering.stackexchange.com/a/39526 – Manohar Reddy Poreddy Nov 16 '19 at 09:00

5 Answers


A port is just a magic number. It doesn't correspond to a piece of hardware. The server opens a socket that 'listens' at port 80 and 'accepts' new connections from that socket. Each new connection is represented by a new socket whose local port is also port 80, but whose remote IP:port is as per the client who connected. So they don't get mixed up. You therefore don't need multiple IP addresses or even multiple ports at the server end.
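The listen/accept pattern this answer describes can be sketched in Python. This is a minimal illustration, not production code; port 8080 stands in for the privileged port 80:

```python
import socket

def serve(host="127.0.0.1", port=8080):
    # One listening socket, bound to a single port.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((host, port))
    listener.listen()
    while True:
        # accept() returns a NEW socket per client; its local port is
        # still 8080, but its remote (IP, port) pair is unique per client,
        # so connections never get mixed up.
        conn, remote_addr = listener.accept()
        print("new connection from", remote_addr)
        conn.sendall(b"hello\n")
        conn.close()
```

Every accepted connection shares the same local port; it is the remote IP:port that distinguishes them.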

user207421
    Hope no one gets confused by the accepted answer, considering that people who search for answers like this aren't very aware of the difference between the two aspects and are new to the networking world – Ankur Anand May 10 '18 at 16:45
  • Thanks for the answer. What happens if the same client connects to the same server, i.e. if the remote ip:port is same as well for two connections, how would the server know which connection is which? – Kraken Mar 06 '22 at 09:58
  • @Kraken The remote port *can't* be the same for both connections. The TIME-WAIT state ensures that. – user207421 May 28 '23 at 10:22

From the TCP/IP Guide:

This identification of connections using both client and server sockets is what provides the flexibility in allowing multiple connections between devices that we take for granted on the Internet. For example, busy application server processes (such as Web servers) must be able to handle connections from more than one client, or the World Wide Web would be pretty much unusable. Since the connection is identified using the client's socket as well as the server's, this is no problem. At the same time that the Web server maintains the connection mentioned just above, it can easily have another connection to say, port 2,199 at IP address 219.31.0.44. This is represented by the connection identifier:

(41.199.222.3:80, 219.31.0.44:2199). 

In fact, we can have multiple connections from the same client to the same server. Each client process will be assigned a different ephemeral port number, so even if they all try to access the same server process (such as the Web server process at 41.199.222.3:80), they will all have a different client socket and represent unique connections. This is what lets you make several simultaneous requests to the same Web site from your computer.

Again, TCP keeps track of each of these connections independently, so each connection is unaware of the others. TCP can handle hundreds or even thousands of simultaneous connections. The only limit is the capacity of the computer running TCP, and the bandwidth of the physical connections to it—the more connections running at once, the more each one has to share limited resources.
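The four-tuple identification the quote describes is easy to observe: two connections from the same client to the same server port are assigned different ephemeral local ports, so their connection identifiers differ. A minimal sketch, assuming a local server listening on an arbitrary port 9090:

```python
import socket

def show_ephemeral_ports(server_addr=("127.0.0.1", 9090)):
    c1 = socket.create_connection(server_addr)
    c2 = socket.create_connection(server_addr)
    # Same remote socket (server IP:port) for both connections...
    assert c1.getpeername() == c2.getpeername()
    # ...but different ephemeral local ports, so the connection
    # identifiers (4-tuples) are distinct.
    p1, p2 = c1.getsockname()[1], c2.getsockname()[1]
    c1.close(); c2.close()
    return p1, p2
```

This is exactly what lets one browser make several simultaneous requests to the same site.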

a.m.
    Poor quality citation, totally confused between sockets and ports. Connections are identified by IP address and *ports,* not by sockets. – user207421 Oct 13 '15 at 04:55
    @AnshumanKumar A TCP socket is a combination of *four* elements, or five if you count the protocol. – user207421 Feb 12 '20 at 08:50

TCP takes care of client identification
As a.m. said, TCP takes care of the client identification, and the server only sees a "socket" per client.
Say a server at 10.10.100.100 listens to port 80 for incoming TCP connections (HTTP is built over TCP). A client's browser (at 10.9.8.7) connects to the server using the client port 27143. The server sees: "the client 10.9.8.7:27143 wants to connect, do you accept?". The server app accepts, and is given a "handle" (a socket) to manage all communication with this client, and the handle will always send packets to 10.9.8.7:27143 with the proper TCP headers.

Packets are never simultaneous
Now, physically, there is generally only one (or two) links connecting the server to the internet, so packets can only arrive in sequential order. The question becomes: what is the maximum throughput through the fiber, and how many responses can the server compute and send in return? Other than CPU time spent or memory bottlenecks while responding to requests, the server also has to keep some resources alive (at least one active socket per client) until the communication is over, and therefore consumes RAM. Throughput is achieved via some optimizations (not mutually exclusive): non-blocking sockets (to avoid pipelining/socket latencies) and multi-threading (to use more CPU cores/threads).
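The multi-threading optimization mentioned above can be sketched as a small thread-pool server. Port 8081 and the hard-coded response are illustrative choices to keep the sketch self-contained:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn, addr):
    try:
        conn.recv(1024)  # read the request (ignored in this sketch)
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    finally:
        conn.close()

def threaded_serve(port=8081, workers=8):
    pool = ThreadPoolExecutor(max_workers=workers)
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen()
    while True:
        conn, addr = listener.accept()
        pool.submit(handle, conn, addr)  # don't block the accept loop
```

The accept loop stays cheap; slow clients only tie up a pool thread, not the listener.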

Improving request throughput further: load balancing
And last, servers on the "front side" of websites generally do not do all the work by themselves (especially the more complicated stuff, like database querying, calculations, etc.), but defer tasks or even forward HTTP requests to distributed servers, while they keep handling the trivial part (e.g. forwarding) for as many requests per second as they can. Distributing work over several servers is called load balancing.
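At its simplest, load balancing is just rotating over a list of backends. A sketch of round-robin selection, with made-up backend addresses:

```python
import itertools

# Hypothetical pool of backend servers behind one public IP.
backends = ["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"]
_rotation = itertools.cycle(backends)

def pick_backend():
    # Each call returns the next backend in rotation; the front server
    # forwards the incoming request to whichever backend is picked.
    return next(_rotation)
```

Real balancers add health checks, weighting, and session affinity on top of this core idea.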

maxbc
  • There is no 'do you want to accept?'. *TCP* accepts, before the application has any say in it whatsoever. – user207421 Feb 12 '19 at 10:48

1) How does a web server handle multiple incoming requests at the same time on a single port (80)?
==> a) One instance of the web service (for example, a Spring Boot microservice) runs/listens on the server machine at port 80.
b) This web service (the Spring Boot app) needs a servlet container, most commonly Tomcat. The container has a thread pool configured.
c) Whenever requests come in from different users simultaneously, the container assigns a thread from the pool to each incoming request.
d) Since the server-side web service code will mostly have singleton beans (in the Java case), each thread pertaining to each request calls the singleton's APIs, and if database access is needed, synchronization of these threads is required, which is done through the @Transactional annotation. This annotation synchronizes the database operations.
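A hedged sketch of step d), in Python rather than Java: a shared singleton-style service is called from many pool threads, with an explicit lock standing in for the synchronization that @Transactional provides around database operations. `CounterService` and its names are invented for illustration only:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class CounterService:
    """Singleton-style service object shared by all request threads."""
    def __init__(self):
        self._lock = threading.Lock()
        self.rows = 0

    def write_row(self):
        # The lock plays the role of transactional synchronization:
        # only one thread at a time performs the "database operation".
        with self._lock:
            self.rows += 1

service = CounterService()
with ThreadPoolExecutor(max_workers=8) as pool:
    for _ in range(1000):
        pool.submit(service.write_row)   # 1000 "requests", 8 threads
```

Without the lock, concurrent read-modify-write on shared state could lose updates; the same hazard is what transactional boundaries guard against in the database case.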

2) Can one server (which is assigned with IP 10.10.100.100) handle this vast amount of incoming users?
If not, then how can one IP address be assigned to more than one server to handle this load?
==> This is taken care of by a load balancer, along with a route table.


The answer is virtual hosts: the HTTP Host header carries the domain name, so the web server knows which site's files to run or send to the client.
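Name-based virtual hosting routes on the Host header. A minimal sketch of that routing, with made-up document roots:

```python
# Hypothetical mapping from domain name to document root.
sites = {
    "www.abcdef.com": "/var/www/abcdef",
    "www.example.com": "/var/www/example",
}

def document_root(request_headers):
    # The Host header may include a port (e.g. "www.example.com:80");
    # strip it before the lookup.
    host = request_headers.get("Host", "").split(":")[0]
    return sites.get(host)  # None if the server doesn't know the domain
```

This is how one server at one IP can serve many domains, though it answers "many sites on one server" rather than "many clients on one port".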

Muflix