
I am writing a toy project, a web server, and I am now testing its concurrency. I wrote two versions of the testing script. The first one:

import socket

reqs = ["fuck /BAD/ HTTP/1.1\nHost: www.baidu.com\r\n\r\n",
        "GET /GOOD/ HTTP/1.1\r\nHook: www.baidu.com\r\n\r\n"]
# note: this opens one socket per request, so it does not exercise concurrency
for i in range(10000):
    example_req = reqs[i % 2]
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("localhost", 15441))
    s.send(example_req.encode("utf-8"))
    # print(example_req)
    print(s.recv(1024).decode('utf-8'))
    s.close()

This version creates a socket, connects to the server, sends a message, and receives the response in each iteration. When I run it, both the server and the client behave well. The second version:

from socket import *

socketList = []
for i in xrange(numConnections):
    s = socket(AF_INET, SOCK_STREAM)
    s.connect((serverHost, serverPort))
    socketList.append(s)
...

This one is in a different style: each iteration only creates a socket and connects to the server, without sending or receiving anything. When numConnections is more than about 150 (e.g. 200, 500, etc.), the server does not receive all of the connections, but when I debug the client script it indicates that all the sockets have been created (and connected). The server-side snippet is:

while (1)
    {
        // select overwrites the set it is given when it returns,
        // so hand it a copy of the set we want to keep watching
        ready_fds = read_fds;
        // a NULL timeout means select never times out; it blocks until some fd is ready
        Select(max_fd, &ready_fds, NULL, NULL, NULL);
        // if listenfd is ready, we can accept a new connection
        if (FD_ISSET(listenfd, &ready_fds))
        {
            clientlen = sizeof(clientaddr);
            // in the proxy, connfd was malloc'd, but this is single-threaded,
            // not sure whether that is necessary here CHECK
            connfd = Accept(listenfd, (SA *)&clientaddr, &clientlen);
            // not yet sure what the last flags argument does CHECK
            Getnameinfo((SA *)&clientaddr, clientlen, hostname, MAXBUF, port, MAXBUF, 0);
            printf("Accepted connection from (%s, %s)\n", hostname, port);
            // add the new connfd to the set being watched
            FD_SET(connfd, &read_fds);
            // and update max_fd (if necessary)
            if (connfd + 1 > max_fd)
            {
                max_fd = connfd + 1;
                // FOR DEBUG
                printf("maxfd: %d\n", max_fd);
            }
        }
        else
        {
            // if it is not listenfd, some already-connected socket is readable
            for (int i = 0; i < max_fd; i++)
            {
                if (FD_ISSET(i, &ready_fds))
                {
                    process_request(i, read_fds);
                }
            }
        }
    }

and the listenfd is created with this:

setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, (const void *)&optval, sizeof(int));
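
That is only the setsockopt call; the rest of the listenfd setup follows the usual csapp-style open_listenfd pattern (socket, SO_REUSEADDR, bind, listen). Roughly, as a sketch rather than the project's exact code (LISTENQ and the error handling are placeholders):

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

#define LISTENQ 1024 /* assumed backlog, not necessarily the project's value */

/* Sketch of a csapp-style open_listenfd: socket, SO_REUSEADDR, bind, listen. */
static int open_listenfd_sketch(int port)
{
    int listenfd, optval = 1;
    struct sockaddr_in serveraddr;

    if ((listenfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
        return -1;

    /* allow the server to be restarted quickly on the same port */
    if (setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR,
                   (const void *)&optval, sizeof(int)) < 0)
        return -1;

    memset(&serveraddr, 0, sizeof(serveraddr));
    serveraddr.sin_family = AF_INET;
    serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);
    serveraddr.sin_port = htons((unsigned short)port);
    if (bind(listenfd, (struct sockaddr *)&serveraddr, sizeof(serveraddr)) < 0)
        return -1;

    /* LISTENQ bounds how many completed connections the kernel will queue
     * before accept() picks them up */
    if (listen(listenfd, LISTENQ) < 0)
        return -1;

    return listenfd;
}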

Specifically, when I run the server under gdb, I see that after about 130 connections have been established (much fewer than the number of connections the client launched), the server can no longer detect activity on the listening fd, and the ready_fds set looks like this:

ready_fds = {__fds_bits = {2305843009213693952, 0, 4, 
    0 <repeats 13 times>}}
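
For reference, those __fds_bits words can be decoded by hand (assuming 64-bit longs, so each word covers 64 fds): word 0 is 2305843009213693952 = 1UL << 61, i.e. fd 61, and word 2 is 4 = 1 << 2, i.e. fd 2*64 + 2 = 130, so only those two fds are marked ready in that snapshot. A small sketch of that decoding:

#include <stdio.h>

/* Decode which fds are set in the dumped fd_set, assuming 64-bit longs
 * (each __fds_bits word then covers 64 fds). The values are copied from
 * the gdb dump above. */
int main(void)
{
    unsigned long words[3] = {2305843009213693952UL, 0UL, 4UL};

    for (int w = 0; w < 3; w++)
        for (int bit = 0; bit < 64; bit++)
            if (words[w] & (1UL << bit))
                printf("fd %d is set\n", w * 64 + bit); /* prints fd 61 and fd 130 */
    return 0;
}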

The code is here: https://github.com/RedemptionC/ServerLiso
