
I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?

Nathan Fellman

13 Answers


This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type

ulimit -c unlimited

then that will tell bash that its programs can dump cores of any size. You can specify a size instead of unlimited if you want, though note that ulimit -c takes its value in 1024-byte blocks (so a suffixed value such as 52M may be rejected on some bash versions; see the comments), and as the comments also note, capping the size can be wise if a runaway dump could fill your disk.

In tcsh, you'd type

limit coredumpsize unlimited
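
As a quick sanity check in bash, a minimal session might look like this (the sleep process is just a stand-in for your crashing program):

ulimit -c              # show the current limit; 0 means no core files
ulimit -c unlimited    # allow cores of any size
sleep 30 &
kill -SEGV %1          # force a segfault; bash prints "(core dumped)"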
Eli Courtwright
  • I am sorry, but does this really answer the question? It asked how to generate a core dump, but this says how to set the limits – Baiyan Huang Aug 09 '11 at 05:30
  • @lzprgmr: To clarify: the reason why core dumps are not generated by default is that the limit is not set and/or set to 0, which prevents the core from being dumped. By setting a limit of unlimited, we guarantee that core dumps can always be generated. – Eli Courtwright Aug 09 '11 at 12:30
  • This [link](http://www.randombugs.com/linux/core-dumps-linux.html) goes deeper and gives some more options to enable generation of core dumps in Linux. The only drawback is that some commands/settings are left unexplained. – Salsa Aug 31 '11 at 19:45
  • On bash 4.1.2(1)-release, limits such as 52M cannot be specified, resulting in an invalid number error message. The man page says that "Values are in 1024-byte increments". – a1an Sep 11 '12 at 12:02
  • Well, I had a "small" OpenGL project that once did some weird thing and caused an X-server crash. When I logged back in, I saw a cute little 17 GB core file (on a 25 GB partition). It's definitely a good idea to keep the core file's size limited :) – IceCool Sep 08 '13 at 15:48
  • I have a question. I don't want to set mine to unlimited. How do I know how large of a core dump I should allow? – PolarisUser Aug 22 '14 at 16:01
  • @PolarisUser: If you wanted to make sure your partition doesn't get eaten, I recommend setting a limit of something like 1 gig. That should be big enough to handle any reasonable core dump, while not threatening to use up all of your remaining hard drive space. – Eli Courtwright Aug 22 '14 at 16:51
  • I want to echo setting a limit for coredumpsize, as someone who just had to clean up a couple hundred 20G core dumps. – JSybrandt Jan 16 '18 at 18:50
  • Note: it does not persist after the login user quits, at least on CentOS; you have to edit /etc/security/limits.conf if you want that. – Imskull Jan 29 '18 at 03:37
  • And remember to put this in .bashrc so that you won't need to do it every time. – BreakBadSP Jan 14 '19 at 07:28

As explained above, the real question being asked here is how to enable core dumps on a system where they are not enabled; that question is answered above.

If you've come here hoping to learn how to generate a core dump for a hung process, the answer is

gcore <pid>

if gcore is not available on your system then

kill -ABRT <pid>

Don't use kill -SEGV, as that will often invoke a signal handler, making it harder to diagnose the stuck process.
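
For example, assuming the hung process has PID 1234 (a placeholder):

gcore -o core 1234    # writes core.1234 and leaves the process running

Unlike kill -ABRT, gcore (shipped with gdb) does not terminate the process; it attaches, dumps, and detaches.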

George Co
  • I think it's far more likely that `-ABRT` will invoke a signal handler than `-SEGV`, as an abort is more likely to be recoverable than a segfault. (If you handle a segfault, normally it'll just trigger again as soon as your handler exits.) A better choice of signal for generating a core dump is `-QUIT`. – celticminstrel Feb 25 '20 at 21:02

To check where the core dumps are generated, run:

sysctl kernel.core_pattern

or:

cat /proc/sys/kernel/core_pattern

where %e is the process name and %t the system time. You can change it in /etc/sysctl.conf and reload with sysctl -p.
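
For example, a sketch of redirecting dumps to /tmp with a descriptive name (the path is just an illustration):

echo 'kernel.core_pattern=/tmp/core.%e.%p.%t' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p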

If the core files are not generated (test it by: sleep 10 & and killall -SIGSEGV sleep), check the limits by: ulimit -a.

If your core file size is limited, run:

ulimit -c unlimited

to make it unlimited.

Then test again; if the core dump is successful, you will see "(core dumped)" after the segmentation fault indication as below:

Segmentation fault: 11 (core dumped)

See also: core dumped - but core file is not in current directory?


Ubuntu

In Ubuntu the core dumps are handled by Apport and can be located in /var/crash/. However, it is disabled by default in stable releases.

For more details, please check: Where do I find the core dump in Ubuntu?

macOS

For macOS, see: How to generate core dumps in Mac OS X?

kenorb
  • For Ubuntu, to quickly revert to normal behavior (dumping a core file in the current directory), simply stop the apport service with "sudo service apport stop". Also note that if you are running within docker, that setting is controlled on the host system and not within the container. – Digicrat Dec 19 '17 at 00:37
  • Instead of disabling apport every time it could be more lasting just to *uninstall* apport (ignoring the recommendation dependency) since the service adds no value for developers. – Marcel May 25 '22 at 17:13

What I did at the end was attach gdb to the process before it crashed, and then when it got the segfault I executed the generate-core-file command. That forced generation of a core dump.
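
In outline, such a gdb session looks like this (<pid> is a placeholder):

gdb -p <pid>              # attach to the running process
(gdb) continue            # let it run until the segfault arrives
(gdb) generate-core-file  # writes core.<pid> in gdb's working directory
(gdb) detach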

Nathan Fellman

Maybe you could do it this way. This program is a demonstration of how to trap a segmentation fault: it shells out to a debugger (this is the original code used under AIX) and prints the stack trace up to the point of the segmentation fault. You will need to change the sprintf call to use gdb in the case of Linux.

#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <stdarg.h>
#include <unistd.h>  /* for getpid() */

static void signal_handler(int);
static void dumpstack(void);
static void cleanup(void);
void init_signals(void);
void panic(const char *, ...);

struct sigaction sigact;
char *progname;

int main(int argc, char **argv) {
    char *s;
    progname = *(argv);
    atexit(cleanup);
    init_signals();
    printf("About to seg fault by assigning zero to *s\n");
    *s = 0;  /* s is uninitialized: writing through it is the deliberate fault */
    sigemptyset(&sigact.sa_mask);  /* not reached if the fault occurs */
    return 0;
}

void init_signals(void) {
    sigact.sa_handler = signal_handler;
    sigemptyset(&sigact.sa_mask);
    sigact.sa_flags = 0;
    sigaction(SIGINT, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGSEGV);
    sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGBUS);
    sigaction(SIGBUS, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGQUIT);
    sigaction(SIGQUIT, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGHUP);
    sigaction(SIGHUP, &sigact, (struct sigaction *)NULL);

    /* Note: SIGKILL can be neither caught nor ignored, so this
       sigaction() call will fail; it is kept here from the original. */
    sigaddset(&sigact.sa_mask, SIGKILL);
    sigaction(SIGKILL, &sigact, (struct sigaction *)NULL);
}

static void signal_handler(int sig) {
    if (sig == SIGHUP) panic("FATAL: Program hung up\n");
    if (sig == SIGSEGV || sig == SIGBUS){
        dumpstack();
        panic("FATAL: %s Fault. Logged StackTrace\n", (sig == SIGSEGV) ? "Segmentation" : ((sig == SIGBUS) ? "Bus" : "Unknown"));
    }
    if (sig == SIGQUIT) panic("QUIT signal ended program\n");
    if (sig == SIGKILL) panic("KILL signal ended program\n");  /* unreachable: SIGKILL cannot be caught */
    if (sig == SIGINT) ;  /* SIGINT is deliberately ignored */
}

void panic(const char *fmt, ...) {
    char buf[256];
    va_list argptr;
    va_start(argptr, fmt);
    vsnprintf(buf, sizeof(buf), fmt, argptr);  /* bounded; vsprintf could overflow */
    va_end(argptr);
    fputs(buf, stderr);  /* don't pass buf as a format string */
    exit(-1);
}

static void dumpstack(void) {
    /* Got this routine from http://www.whitefang.com/unix/faq_toc.html
    ** Section 6.5. Modified to redirect to file to prevent clutter
    */
    /* This needs to be changed on Linux: replace dbx with gdb, e.g.
    ** sprintf(dbx, "gdb -batch -ex where -p %d > %s.dump", getpid(), progname);
    */
    char dbx[160];

    sprintf(dbx, "echo 'where\ndetach' | dbx -a %d > %s.dump", getpid(), progname);

    system(dbx);
    return;
}

void cleanup(void) {
    sigemptyset(&sigact.sa_mask);
    /* Do any cleaning up chores here */
}

You may have to additionally add a parameter to get gdb to dump the core, as shown in this blog post.

t0mm13b

There are more things that may influence the generation of a core dump. I encountered these:

  • the directory for the dump must be writable. By default this is the current directory of the process, but that may be changed by setting /proc/sys/kernel/core_pattern.
  • in some conditions, the kernel value in /proc/sys/fs/suid_dumpable may prevent the core from being generated.

More situations that can prevent generation are described in the man page; try man core.
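
A quick way to inspect both settings (the output values are illustrative):

cat /proc/sys/kernel/core_pattern   # e.g. "core": dumps go to the process's current directory
cat /proc/sys/fs/suid_dumpable      # 0 means processes that changed credentials won't dump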

mlutescu

For Ubuntu 14.04

  1. Check that core dumps are enabled:

    ulimit -a

  2. One of the lines should be:

    core file size          (blocks, -c) unlimited

  3. If not:

    edit ~/.bashrc (e.g. gedit ~/.bashrc), add ulimit -c unlimited to the end of the file, save, and reopen the terminal.

  4. Build your application with debug information (see the sketch after this list):

    In the Makefile: -O0 -g

  5. Run the application that creates the core dump (a core dump file named 'core' should be created next to the application_name file):

    ./application_name

  6. Run under gdb:

    gdb application_name core
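
A compressed sketch of steps 4-6, assuming a single-file program (application_name.c is a placeholder):

    gcc -O0 -g -o application_name application_name.c   # step 4: build with debug info
    ./application_name                                  # step 5: "Segmentation fault (core dumped)"
    gdb application_name core                           # step 6: inspect the dump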
    
mrgloom
  • In Step 3, how do I 're-run' the terminal? Do you mean reboot? – Naveen Jun 30 '17 at 14:34
  • @Naveen no, just close the terminal and open a new one; it also seems you can just run `ulimit -c unlimited` in the terminal as a temporary solution, because only editing `~/.bashrc` requires a terminal restart for the changes to take effect. – mrgloom Jun 30 '17 at 15:59

In order to activate the core dump do the following:

  1. In /etc/profile, comment out the line:

    # ulimit -S -c 0 > /dev/null 2>&1
    
  2. In /etc/security/limits.conf comment out the line:

    *               soft    core            0
    
  3. execute the command limit coredumpsize unlimited and check it with the command limit (these are csh/tcsh builtins; a bash equivalent is sketched after this list):

    # limit coredumpsize unlimited
    # limit
    cputime      unlimited
    filesize     unlimited
    datasize     unlimited
    stacksize    10240 kbytes
    coredumpsize unlimited
    memoryuse    unlimited
    vmemoryuse   unlimited
    descriptors  1024
    memorylocked 32 kbytes
    maxproc      528383
    #
    
  4. to check whether the core file gets written, you can kill the relevant process with kill -s SEGV <PID> (should not be needed; just in case no core file gets written, this can be used as a check):

    # kill -s SEGV <PID>
    

Once the core file has been written, make sure to deactivate the core dump settings again in the relevant files (1./2./3.)!
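
If your login shell is bash rather than csh/tcsh, the equivalent of step 3 is:

ulimit -c unlimited   # bash counterpart of "limit coredumpsize unlimited"
ulimit -a             # should now show "core file size ... unlimited"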

Edgar Jordi

Ubuntu 19.04

None of the other answers by themselves helped me, but the following summary did the job:

Create ~/.config/apport/settings with the following content:

[main]
unpackaged=true

(This tells apport to also write core dumps for custom apps)

Check ulimit -c. If it outputs 0, fix it with:

ulimit -c unlimited

Just in case, restart apport:

sudo systemctl restart apport

Crash files are now written to /var/crash/, but you cannot use them with gdb directly. To use them with gdb, run:

apport-unpack <location_of_report> <target_directory>
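
For example (the report name and target directory below are placeholders):

apport-unpack /var/crash/_usr_local_bin_myapp.1000.crash /tmp/myapp-crash
gdb ./myapp /tmp/myapp-crash/CoreDump

(apport-unpack extracts the report fields as files; the core itself lands in a file named CoreDump.)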

Further information:

  • Some answers suggest changing core_pattern. Be aware that that file might get overwritten by the apport service on restart.
  • Simply stopping apport did not do the job
  • The ulimit -c value might get changed automatically while you're trying other answers from the web; be sure to check it regularly while setting up your core dump creation.

DarkTrick

By default you will get a core file. Check to see that the current directory of the process is writable, or no core file will be created.

Mark Harrison
  • By "current directory of the process" do you mean the $cwd at the time the process was run? ~/abc> /usr/bin/cat def if cat crashes, is the current directory in question ~/abc or /usr/bin? – Nathan Fellman Apr 30 '09 at 07:52
  • ~/abc. Hmm, comments have to be 15 characters long! – Mark Harrison May 01 '09 at 14:56
  • This would be the current directory at the time of the SEGV. Also, processes running with a different effective user and/or group than the real user/group will not write core files. – Darron Jan 26 '10 at 14:02

It is better to turn on core dumps programmatically, using the setrlimit system call.

example:

#include <sys/resource.h>
#include <stdbool.h>

bool enable_core_dump(){
    struct rlimit corelim;

    corelim.rlim_cur = RLIM_INFINITY;
    corelim.rlim_max = RLIM_INFINITY;

    /* note: an unprivileged process cannot raise the hard limit;
       setrlimit() fails with EPERM if rlim_max exceeds it */
    return (0 == setrlimit(RLIMIT_CORE, &corelim));
}
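
A minimal usage sketch (assuming enable_core_dump from above is in scope):

#include <stdio.h>   /* for perror */

int main(void) {
    if (!enable_core_dump())
        perror("setrlimit");   /* e.g. EPERM if the hard limit is lower */
    /* ... any crash from here on can produce a core file ... */
    return 0;
}
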
kgbook
  • why is that better? – Nathan Fellman Aug 26 '18 at 08:22
  • The core file is generated after a crash with no need to run `ulimit -c unlimited` in the command-line environment and then rerun the application. – kgbook Aug 27 '18 at 06:47
  • I don't want a core dump every time it crashes, only when a user contacts me as the developer to look at it. If it crashes 100 times, I don't need 100 core dumps to look at. – Nathan Fellman Aug 27 '18 at 18:23
  • In that case, better to use `ulimit -c unlimited`. Alternatively you can guard it with a macro definition: the application will not include the `enable_core_dump` symbol if that macro is not defined for the release build, and you get core dumps only in the debug version. – kgbook Aug 28 '18 at 07:38
  • even if it's qualified by a macro, that still requires me to recompile if I want to generate a core dump, rather than simply executing a command in the shell before rerunning. – Nathan Fellman Aug 28 '18 at 08:57
  • It's convenient for a developer to obtain a core dump file and more verbose debug information. A release version usually compiles with `-O2` and without `-g`, with debug information stripped or optimized out; I control all debug options and core dumping with that macro definition in CMakeLists.txt or the Makefile. You can make your own choice. – kgbook Aug 28 '18 at 09:23

It's worth mentioning that if you have a systemd setup, then things are a little bit different. Such a setup typically pipes core files, by means of the core_pattern sysctl value, through systemd-coredump(8). The core file size rlimit would typically be configured as "unlimited" already.

It is then possible to retrieve the core dumps using coredumpctl(1).

The storage of core dumps, etc. is configured by coredump.conf(5). There are examples of how to get the core files in the coredumpctl man page, but in short, it would look like this:

Find the core file:

[vps@phoenix]~$ coredumpctl list test_me | tail -1
Sun 2019-01-20 11:17:33 CET   16163  1224  1224  11 present /home/vps/test_me

Get the core file:

[vps@phoenix]~$ coredumpctl -o test_me.core dump 16163
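
You can also load a stored dump straight into gdb (using the PID from the listing above):

coredumpctl gdb 16163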
Pawel Veselov

This is typically sufficient:

ulimit -c unlimited

Note this will not persist between ssh sessions! To add persistence (run as root):

echo '* soft core unlimited' >> /etc/security/limits.conf

Now, if you're using Ubuntu, "apport" is probably running. Here's how to check:

sudo systemctl status apport.service

If it is, you'll probably find core dumps in one of these places:

/var/lib/apport/coredump 
/var/crash

If you want to change the location of core dumps

Make sure that the directory you're sending core dumps to exists and that you have permission to create files in it!

Here's an example. Note this will not persist across reboots:

mkdir /coredumps
sysctl -w kernel.core_pattern=/coredumps/core-%e-%s-%u-%g-%p-%t

Make sure that the process that's crashing has write access to this directory. The easiest way is something like this:

chmod 777 /coredumps

Test that core dumps works

> crash.c                          # create an empty source file
gcc -Wl,--defsym=main=0 crash.c    # define main as address 0, so running it segfaults
./a.out
==output== Segmentation fault (core dumped)

If it doesn't say "core dumped" above, something isn't working.

theicfire