I am trying to increase the receive buffer size for a UDP socket, but the final size does not seem to be predictable:
LOG_INFO("UDP echo server default receive buffer size : " << rcv_buf << " bytes");
// increase default buffer sizes
rcv_buf *= 3;
LOG_INFO("trying to increase receive buffer size to : " << rcv_buf << " bytes");
if (!SockWrap::set_recv_buf_size(m_handle, sizeof(m_sockaddr_in), rcv_buf))
LOG_ERR("unable to set new receive buffer size");
// checking the new size after possible modifications if any
rcv_buf = SockWrap::recv_buf(m_handle, sizeof(m_sockaddr_in));
if (rcv_buf == -1) {
LOG_ERR("unable to read UDP echo server receive buffer size after modification");
} else {
LOG_INFO("UDP echo server new receive buffer size : " << rcv_buf << " bytes");
}
The wrapper functions are:
bool SockWrap::set_recv_buf_size(int fd, socklen_t len, int size)
{
    // SO_RCVBUF option is an integer
    int n = setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, len);
    if (n == -1) {
        LOG_ERR("setsockopt : " << strerror(errno));
        return false;
    }
    return true;
}
and
int SockWrap::recv_buf(int fd, socklen_t len)
{
    // SO_RCVBUF option is an integer
    int optval;
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &optval, &len) == -1) {
        LOG_ERR("getsockopt : " << strerror(errno));
        return -1;
    }
    return optval;
}
Output:
UDP echo server default receive buffer size : 212992 bytes
trying to increase receive buffer size to : 638976 bytes
UDP echo server new receive buffer size : 425984 bytes
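The same behavior can be reproduced with a minimal standalone program. This is a sketch of my own, without the SockWrap/LOG machinery; the socket setup here is not the actual server code:

// build: g++ -o rcvbuf_demo rcvbuf_demo.cpp
#include <cerrno>
#include <cstring>
#include <iostream>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd == -1) {
        std::cerr << "socket: " << strerror(errno) << "\n";
        return 1;
    }

    int rcv_buf = 0;
    socklen_t len = sizeof(rcv_buf);

    // default size (rmem_default on Linux)
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv_buf, &len);
    std::cout << "default SO_RCVBUF: " << rcv_buf << " bytes\n";

    // ask for three times the default
    int wanted = rcv_buf * 3;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted)) == -1)
        std::cerr << "setsockopt: " << strerror(errno) << "\n";

    // read back what the kernel actually kept
    len = sizeof(rcv_buf);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv_buf, &len);
    std::cout << "SO_RCVBUF after setsockopt(" << wanted << "): " << rcv_buf << " bytes\n";

    close(fd);
    return 0;
}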
I have checked the limits of my system in /proc/sys/net/ipv4:
cat udp_rmem_min
4096
cat udp_mem
186162 248216 372324
and in /proc/sys/net/core:
cat rmem_max
212992
cat rmem_default
212992
So the first output seems pretty clear: the default receive buffer size is 212992 bytes, which is defined by rmem_default.
But then the size is increased and is, surprisingly, greater than rmem_max, yet still not what I asked for.
Where does this final value (425984 bytes) come from?
Is this value a maximum, and does it depend on how much memory is currently used by the kernel?
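For what it's worth, the value I get back is exactly twice rmem_max. A quick check of the arithmetic (a sketch of my own, assuming the usual Linux /proc path):

#include <fstream>
#include <iostream>

int main()
{
    long rmem_max = 0;
    std::ifstream f("/proc/sys/net/core/rmem_max");
    f >> rmem_max;                                            // 212992 on my machine
    std::cout << "rmem_max     : " << rmem_max << "\n";
    std::cout << "2 * rmem_max : " << 2 * rmem_max << "\n";   // 425984, the value reported by getsockopt
    return 0;
}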
EDIT:
Following the answer, I have tested other values, and it seems even possible to set rmem_default to a value greater than rmem_max:
echo 500000 > /proc/sys/net/core/rmem_default
cat /proc/sys/net/core/rmem_default
500000
Now, before setsockopt is called, getsockopt returns (as always) rmem_default, i.e. 500000, not rmem_default * 2.
But if I use setsockopt to set the value to 500000, then getsockopt returns rmem_max * 2, which is 425984.
So it seems that the /proc interface allows more control over the buffer size than setsockopt.
What is the purpose of rmem_max if rmem_default can be greater?
/* from kernel 5.10.63 net/core/sock.c */
case SO_RCVBUF:
    /* Don't error on this BSD doesn't and if you think
     * about it this is right. Otherwise apps have to
     * play 'guess the biggest size' games. RCVBUF/SNDBUF
     * are treated in BSD as hints
     */
    __sock_set_rcvbuf(sk, min_t(u32, val, sysctl_rmem_max));
    break;
and
static void __sock_set_rcvbuf(struct sock *sk, int val)
{
    /* Ensure val * 2 fits into an int, to prevent max_t() from treating it
     * as a negative value.
     */
    val = min_t(int, val, INT_MAX / 2);
    sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
    /* We double it on the way in to account for "struct sk_buff" etc.
     * overhead. Applications assume that the SO_RCVBUF setting they make
     * will allow that much actual data to be received on that socket.
     *
     * Applications are unaware that "struct sk_buff" and other overheads
     * allocate from the receive buffer during socket buffer allocation.
     *
     * And after considering the possible alternatives, returning the value
     * we actually used in getsockopt is the most desirable behavior.
     */
    WRITE_ONCE(sk->sk_rcvbuf, max_t(int, val * 2, SOCK_MIN_RCVBUF));
}
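Plugging my numbers into that code path (a sketch that just mirrors the two quoted functions, not the real kernel structures, and ignoring the SOCK_MIN_RCVBUF floor):

#include <algorithm>
#include <iostream>

int main()
{
    const int sysctl_rmem_max = 212992;   // /proc/sys/net/core/rmem_max
    const int requested       = 638976;   // what I passed to setsockopt

    // SO_RCVBUF handler: clamp the request to rmem_max first
    int val = std::min(requested, sysctl_rmem_max);    // 212992

    // __sock_set_rcvbuf: double it to account for sk_buff overhead
    int sk_rcvbuf = val * 2;                           // 425984

    std::cout << "sk_rcvbuf = " << sk_rcvbuf << "\n";  // matches the getsockopt output
    return 0;
}

This matches the 425984 bytes reported after setsockopt. The clamp to rmem_max is applied only on the setsockopt path, which would explain why an rmem_default above rmem_max still shows up unchanged before setsockopt is called.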
But maybe this edit should be another (related) question.
Thank you.