I'm trying to allocate a DMA buffer for an HPC workload that requires 64GB of buffer space. In between computation phases, some data is offloaded to a PCIe card. Rather than copying data into a bunch of dinky 4MB buffers handed out by pci_alloc_consistent, I would like to create 64 1GB buffers, each backed by a 1GB HugePage.
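For reference, this is roughly the pattern I'm trying to get away from: lots of small coherent allocations that data has to be copied into between compute phases. This is only a sketch; the device pointer, buffer count, and error handling are placeholders, not my actual driver code.

    #include <linux/pci.h>

    #define NUM_BUFS  16384           /* 64GB / 4MB, just for illustration */
    #define BUF_SIZE  (4UL << 20)     /* 4MB, the size I'm currently limited to */

    static void       *bufs[NUM_BUFS];
    static dma_addr_t  bus_addrs[NUM_BUFS];

    /* pdev is the PCI device already probed by my driver (placeholder) */
    static int alloc_small_dma_bufs(struct pci_dev *pdev)
    {
        int i;

        for (i = 0; i < NUM_BUFS; i++) {
            /* legacy coherent DMA API on 2.6.32 */
            bufs[i] = pci_alloc_consistent(pdev, BUF_SIZE, &bus_addrs[i]);
            if (!bufs[i])
                return -ENOMEM;
        }
        return 0;
    }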
Some background info:

    kernel version: CentOS 6.4 / 2.6.32-358.el6.x86_64
    kernel boot options: hugepagesz=1g hugepages=64 default_hugepagesz=1g
Relevant portion of /proc/meminfo:

    AnonHugePages:         0 kB
    HugePages_Total:      64
    HugePages_Free:       64
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:    1048576 kB
    DirectMap4k:         848 kB
    DirectMap2M:     2062336 kB
    DirectMap1G:   132120576 kB
I can mount -t hugetlbfs nodev /mnt/hugepages successfully. CONFIG_HUGETLB_PAGE is enabled, and MAP_HUGETLB is defined.
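As far as the user-space side goes, my understanding is that a file created on that mount is backed by the reserved huge pages, roughly like this (a sketch only; the file name is just an example under my mount point):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BUF_SIZE (1UL << 30)    /* one 1GB huge page */

    int main(void)
    {
        /* files on a hugetlbfs mount are backed by huge pages */
        int fd = open("/mnt/hugepages/buf0", O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* faulting the page in should drop HugePages_Free by one */
        ((volatile char *)buf)[0] = 0;

        munmap(buf, BUF_SIZE);
        close(fd);
        unlink("/mnt/hugepages/buf0");
        return 0;
    }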
I have read about using libhugetlbfs to call get_huge_pages() in user space, but ideally this buffer would be allocated in kernel space. I tried calling do_mmap() with MAP_HUGETLB, but it didn't seem to change the number of free hugepages, so I don't think the mapping was actually being backed by huge pages.
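For comparison, the user-space equivalent of what I hoped do_mmap() would give me is something like the sketch below. To be clear about my assumptions: on this 2.6.32 kernel there are no MAP_HUGE_* size-selection flags, so I'm assuming the default_hugepagesz=1g boot option is what makes an anonymous MAP_HUGETLB mapping use the 1GB pages, and my understanding is that libhugetlbfs's get_huge_pages(len, GHP_DEFAULT) ends up creating a similar huge-page-backed mapping.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define BUF_SIZE (1UL << 30)    /* must be a multiple of the huge page size */

    int main(void)
    {
        /* anonymous mapping explicitly backed by huge pages */
        void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* touching the memory faults the huge pages in;
         * HugePages_Free in /proc/meminfo should drop accordingly */
        memset(buf, 0, BUF_SIZE);

        munmap(buf, BUF_SIZE);
        return 0;
    }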
So I guess what I'm getting at is: is there any way to map a buffer onto a 1GB HugePage from kernel space, or does it have to be done in user space? Alternatively, does anyone know of another way to get an immense amount (1-64GB) of contiguous physical memory available as a kernel buffer?