
I wrote a simple program to restrict its data size to 65 KB, and to verify this I allocate a dummy block of memory larger than 65 KB. If I am doing everything correctly (as below), the malloc call should fail, shouldn't it?

#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

int main (int argc, char *argv[])
{
  struct rlimit limit;


  /* Get max data size . */
  if (getrlimit(RLIMIT_DATA, &limit) != 0) {
    printf("getrlimit() failed with errno=%d\n", errno);
    return 1;
  }

  printf("The soft limit is %lu\n", limit.rlim_cur);
  printf("The hard limit is %lu\n", limit.rlim_max);

  limit.rlim_cur = 65 * 1024;
  limit.rlim_max = 65 * 1024;

  if (setrlimit(RLIMIT_DATA, &limit) != 0) {
    printf("setrlimit() failed with errno=%d\n", errno);
    return 1;
  }

  if (getrlimit(RLIMIT_DATA, &limit) != 0) {
    printf("getrlimit() failed with errno=%d\n", errno);
    return 1;
  }

  printf("The soft limit is %lu\n", limit.rlim_cur);
  printf("The hard limit is %lu\n", limit.rlim_max);
  system("bash -c 'ulimit -a'");
    int *new2 = NULL;
    new2 = malloc(66666666);
    if (new2 == NULL)
    {
        printf("malloc failed\n");
        return;
    }
    else
    {
        printf("success\n");
    }

  return 0;
}

Surprisingly, the output is something like this:

The soft limit is 4294967295
The hard limit is 4294967295
The soft limit is 66560
The hard limit is 66560
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) 65
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14895
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14895
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
success

Am I doing anything wrong here? Please share your inputs. Thanks!

  • The bare return after the "malloc failed" message should be fixed to return 1 (or some non-zero value). – Tom Karzes Dec 28 '15 at 06:27
  • I'm running Ubuntu Linux 14.04 on amd64 with 8 GB of RAM, and compiling using gcc with the parameters -Wall -Wextra -pedantic -std=c99. This results in the compiler emitting three warnings: 1) line 41:9: warning: 'return' with no value, in function returning non-void, 2) unused parameter argc, 3) unused parameter argv. When asking a question about a run-time problem, always post code that compiles cleanly. – user3629249 Dec 28 '15 at 06:48

2 Answers


From the setrlimit man page:

RLIMIT_DATA

The maximum size of the process's data segment (initialized data, uninitialized data, and heap). This limit affects calls to brk(2) and sbrk(2), which fail with the error ENOMEM upon encountering the soft limit of this resource.

Specifically, that resource does not apply to memory obtained via mmap. Internally malloc uses various mechanisms for obtaining new memory. In this case you will find that it used mmap and not sbrk or brk. You can verify this by dumping the system calls from your program with strace.
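
To see RLIMIT_DATA actually being enforced, here is a minimal sketch (an illustration only, assuming Linux/glibc where sbrk() is available) that grows the data segment directly via sbrk instead of going through malloc; with the 65 KB limit in place the call should fail with ENOMEM:

#define _DEFAULT_SOURCE            /* expose sbrk() under strict -std= modes */
#include <sys/resource.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main (void)
{
  struct rlimit limit;

  limit.rlim_cur = 65 * 1024;
  limit.rlim_max = 65 * 1024;

  if (setrlimit(RLIMIT_DATA, &limit) != 0) {
    perror("setrlimit");
    return 1;
  }

  /* Ask for far more than 65 KB straight from the program break;
     RLIMIT_DATA governs this path, so the call should fail. */
  if (sbrk(66666666) == (void *) -1) {
    printf("sbrk failed with errno=%d (%s)\n", errno, strerror(errno));
    return 1;
  }

  printf("sbrk succeeded\n");
  return 0;
}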

To achieve what you want, use the RLIMIT_AS resource instead.
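
For example, a minimal sketch along those lines (untested, adapted from the code in the question) that caps the whole address space, so that mmap-backed allocations count against the limit as well:

#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
  /* Cap the total address space at 32 MB; memory obtained via mmap counts
     against this limit, so a ~66 MB malloc should fail whichever path
     malloc takes internally. */
  struct rlimit limit;

  limit.rlim_cur = 32 * 1024 * 1024;
  limit.rlim_max = 32 * 1024 * 1024;

  if (setrlimit(RLIMIT_AS, &limit) != 0) {
    perror("setrlimit");
    return 1;
  }

  void *p = malloc(66666666);
  if (p == NULL) {
    printf("malloc failed, as expected under RLIMIT_AS\n");
    return 1;
  }

  printf("malloc unexpectedly succeeded\n");
  free(p);
  return 0;
}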

kaylum
  • There's apparently a Linux kernel patch intended to address this, but it hasn't made it into the kernel yet (at least, not the one OP is running, or the one I'm running). Here's a description along with the patch: [RLIMIT_DATA patch](http://lkml.iu.edu/hypermail/linux/kernel/0707.1/0675.html) – Tom Karzes Dec 28 '15 at 06:40
  • @kaylum Exactly what I needed! And yes, you are correct: in this case the memory is allocated by mmap. I verified this with the malloc_stats() library call, whose output was something like this: Arena 0: system bytes = 0, in use bytes = 0; Total (incl. mmap): system bytes = 66670592, in use bytes = 66670592, max mmap regions = 1, max mmap bytes = 66670592. Now I am curious to know more about mmap, any inputs or links for the same? :) – pa1 Dec 28 '15 at 07:20
  • @Coder Depends on what you want to know about `mmap`. But certainly the [mmap man page](http://linux.die.net/man/2/mmap) is the place to start if you haven't read it already. – kaylum Dec 28 '15 at 07:21
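
For a first taste of mmap, here is a minimal sketch (an illustration, assuming Linux) of the kind of anonymous mapping that glibc malloc roughly falls back to for large requests:

#define _DEFAULT_SOURCE            /* expose MAP_ANONYMOUS under strict -std= modes */
#include <sys/mman.h>
#include <stdio.h>

int main (void)
{
  size_t len = 66666666;

  /* Anonymous private mapping: zero-filled pages not backed by any file.
     On the kernels discussed here this memory is not counted against
     RLIMIT_DATA, which is why the malloc in the question succeeds. */
  void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED) {
    perror("mmap");
    return 1;
  }

  printf("mapped %zu bytes at %p\n", len, p);
  munmap(p, len);
  return 0;
}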

After correcting the compilation problems, this is the code:

#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

int main ( void )
{
  struct rlimit limit;


  /* Get max data size . */
  if (getrlimit(RLIMIT_DATA, &limit) != 0) {
    printf("getrlimit() failed with errno=%d\n", errno);
    exit( EXIT_FAILURE );
  }

  printf("The soft limit is %lu\n", limit.rlim_cur);
  printf("The hard limit is %lu\n", limit.rlim_max);

  limit.rlim_cur = 65 * 1024;
  limit.rlim_max = 65 * 1024;

  if (setrlimit(RLIMIT_DATA, &limit) != 0)
  {
    printf("setrlimit() failed with errno=%d\n", errno);
    exit( EXIT_FAILURE );
  }

  if (getrlimit(RLIMIT_DATA, &limit) != 0)
  {
    printf("getrlimit() failed with errno=%d\n", errno);
    exit( EXIT_FAILURE );
  }

  printf("The soft limit is %lu\n", limit.rlim_cur);
  printf("The hard limit is %lu\n", limit.rlim_max);
  system("bash -c 'ulimit -a'");

  int *new2 = NULL;
  new2 = malloc(66666666);

  if (new2 == NULL)
  {
    printf("malloc failed\n");
    exit( EXIT_FAILURE );
  }
  else
  {
    printf("success\n");
  }

  return 0;
}

and here is the output:

The soft limit is 18446744073709551615
The hard limit is 18446744073709551615
The soft limit is 66560
The hard limit is 66560
bash: xmalloc: .././variables.c:2307: cannot allocate 48 bytes (16384 bytes allocated)
success

This indicates that the modification to the rlimit worked: the system() call was successful, the bash command itself failed to allocate memory under the new limit, and the malloc was still successful.

Multiple runs of the same code always output the exact same values, so there is no permanent change to the rlimit value.

After running the above code several times, while leaving each terminal window open, running the bash command in yet another terminal window resulted in the following:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 54511
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 54511
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Then running the code in yet another terminal, followed by the bash command in that same terminal, produced the exact same output values.

Therefore, I suspect the code is taking the wrong approach to limiting the amount of memory available.

user3629249
  • This doesn't address the question, namely why setting RLIMIT_DATA fails to limit the data segment size. If it were working, malloc would fail, which is what OP was expecting. – Tom Karzes Dec 28 '15 at 06:37
  • It does point out that 1) the change is transient rather than permanent, 2) the wrong approach is being taken, 3) due to `virtual memory` and `memory paging`, reducing the RAM actually available to a program will, at most, make it run slower, since the smaller available memory results in more `page fault` events, but is otherwise of little interest, and 4) `malloc` does not require the allocated memory to actually be resident in RAM until it is actually used, and even then only the pages that are actually being accessed. – user3629249 Dec 28 '15 at 06:57
  • The important point is that RLIMIT_DATA doesn't have the desired effect in current Linux kernels, and OP should use RLIMIT_AS instead. See kaylum's answer and explanation. – Tom Karzes Dec 28 '15 at 07:00