
I want to use Huge Pages with memory-mapped files on Linux 3.13.

To get started, on Ubuntu I did this to allocate 10 huge pages:

sudo apt-get install hugepages
sudo hugeadm --pool-pages-min=2048K:10
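
To confirm the pool was actually reserved, I check /proc/meminfo (the numbers below are just what one would expect for a 10-page pool, shown for illustration):

grep Huge /proc/meminfo
HugePages_Total:      10
HugePages_Free:       10
Hugepagesize:       2048 kB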

Then I ran this test program:

#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    size_t size = 2 * 1024 * 1024; /* 1 huge page */

    int fd = open("foo.bar", O_RDWR|O_CREAT, 0666);
    assert(fd >= 0);
    int rc = ftruncate(fd, size);
    assert(rc == 0);

    void* hint = 0;
    int flags = MAP_SHARED | MAP_HUGETLB;
    void* data = mmap(hint, size, PROT_READ|PROT_WRITE, flags, fd, 0);
    if (data == MAP_FAILED)
        perror("mmap");
    assert(data != MAP_FAILED);
}

It always fails with EINVAL. If you change flags to MAP_PRIVATE|MAP_ANONYMOUS then it works, but of course it won't write anything to the file.
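
For reference, the anonymous variant that does succeed looks roughly like this (assuming MAP_HUGETLB is kept so the mapping really uses huge pages; it is of course not backed by foo.bar):

    /* Anonymous huge-page mapping: succeeds, but writes never reach the file. */
    int flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB;
    void* data = mmap(NULL, size, PROT_READ|PROT_WRITE, flags, -1, 0);
    assert(data != MAP_FAILED);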

I also tried using madvise() after mmap() without MAP_HUGETLB:

    rc = madvise(data, size, MADV_HUGEPAGE);
    if (rc != 0)
        perror("madvise");
    assert(rc == 0);

This also fails (EINVAL) if MAP_ANONYMOUS is not used.
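
For completeness, the sequence I mean is roughly this sketch (the mmap() itself succeeds for the file-backed case; it is the madvise() that returns EINVAL):

    /* File-backed mapping without MAP_HUGETLB succeeds... */
    void* data = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
    assert(data != MAP_FAILED);
    /* ...but requesting transparent huge pages on it fails with EINVAL. */
    rc = madvise(data, size, MADV_HUGEPAGE);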

Is there any way to enable huge pages with memory-mapped files on disk?

To be clear, I am looking for a way to do this in C; I'm not asking for a solution to apply to existing executables (then the question would belong on SuperUser).

John Zwinck
  • I have been trying similar stuff - without luck. If you come up with a solution, please share :) BTW: Have you read this QA: http://stackoverflow.com/questions/30470972/using-mmap-and-madvise-for-huge-pages It doesn't solve my problem but it provides a link to some kernel documentation. I tried to follow that documentation - still without any luck but perhaps you can make something of it. – Support Ukraine May 19 '17 at 08:17
  • Were you able to figure out a way to do this? – zane Nov 19 '22 at 05:14
  • @Mehnaz: No, but I have not tried since 2017. – John Zwinck Nov 19 '22 at 07:03
  • Did you use any alternative? – zane Nov 21 '22 at 02:37
  • I don't know of any alternative apart from perhaps using a filesystem other than ext4. – John Zwinck Nov 21 '22 at 18:58

2 Answers


It looks like the underlying filesystem you are using does not support memory-mapping files using huge pages.

For example, for ext4 this support is still under development (as of January 2017) and has not yet been merged into the mainline kernel (as of May 19, 2017).

If you run a kernel with that patchset applied, do note that you need to enable huge page support in the filesystem mount options, for example adding huge=always to the fourth column in /etc/fstab for the filesystems desired, or using sudo mount -o remount,huge=always /mountpoint.
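
For illustration only (the device and mount point are made up, and huge=always is only meaningful with that patchset applied), the fstab entry would look something like:

/dev/sda2  /mountpoint  ext4  defaults,huge=always  0  2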

Nominal Animal

There is some confusion here: huge pages can be used through the raw kernel interface and/or through a user-space library (libhugetlbfs) and its accompanying tools (e.g. hugeadm).

If you want to mmap() a memory region into huge pages, you are using the raw kernel interface. To do that, there is a recipe in this answer.
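
In case it helps, the raw interface also covers files that live on a mounted hugetlbfs (which is not an ordinary file on disk); a minimal sketch, assuming hugetlbfs is mounted at /mnt/huge:

    /* A file created on a hugetlbfs mount is backed by huge pages.
       The mount point is an assumption: mount -t hugetlbfs none /mnt/huge */
    int fd = open("/mnt/huge/foo", O_CREAT | O_RDWR, 0666);
    assert(fd >= 0);
    void* data = mmap(NULL, 2 * 1024 * 1024, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    assert(data != MAP_FAILED);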

If you want to use the user-space library (libhugetlbfs) instead, refer to the manuals of its tools and the manual of its API.

Rachid K.
  • Is it possible to do a file-backed mmap with the MAP_HUGETLB flag? – zane Nov 21 '22 at 02:36
  • AFAIK, it is not possible. Huge pages are dedicated to reducing page faults and TLB misses by using consecutive physical pages. This is typically used to make a dynamic memory allocator more efficient, to map thread stacks, or to share big chunks of memory between processes. – Rachid K. Nov 21 '22 at 07:29