Let's say we have 16x1GB hugepages available for DPDK. We want to use as much memory as possible for a single mempool of mbufs. How can we calculate the maximum number of packets that will lead to a successful rte_mempool creation, given the maximum packet size?
For simplicity, assume both the private data size and the cache size are zero:
const uint16_t max_rx_pkt_size = 9216;
// We have 16 GB of hugepage memory (16 GB overflows uint32_t, so use a 64-bit type)
const uint64_t hugepage_size_bytes = 16ULL * 1024 * 1024 * 1024;
// How do we calculate the maximum number of packets that we can allocate?
const uint32_t num_packets = hugepage_size_bytes / max_rx_pkt_size;
struct rte_mempool *mp = rte_pktmbuf_pool_create("rx_packet_pool", num_packets,
        0 /* cache size */, 0 /* private size */,
        max_rx_pkt_size, rte_socket_id());
The above call to rte_pktmbuf_pool_create() fails due to lack of memory, with rte_errno set to ENOMEM.
Clearly DPDK allocates some memory internally for its own data structures, so we cannot create an rte_mempool that occupies 100% of our hugepage memory.
The current workaround we are using is to reduce hugepage_size_bytes before we calculate num_packets:
hugepage_size_bytes -= (hugepage_size_bytes / 32);
Then we end up with a smaller value of num_packets, and rte_pktmbuf_pool_create() succeeds.
However, when we changed max_rx_pkt_size to a different value, say 1460, the mempool allocation failed again, so this is not a robust approach. Is there a way to programmatically compute (or at least safely estimate) the maximum number of mbufs that can be allocated from a given amount of hugepage memory?