19

Sometimes when I run my code, a core dump file is generated when I terminate the program by pressing Ctrl+\. The file name is of the form core.*. The program is not terminating abruptly, and there is no segmentation fault, so I believe the signal involved is SIGQUIT and not SIGABRT or SIGSEGV. If I press Ctrl+C or Ctrl+Z, the file is not generated.

Can anyone tell me why it is generated only when Ctrl+\ is pressed? How can I prevent this core dump file from being generated? Is there any use for the core dump file?

sawa
Chaithra
  • When you say "run my code", are you talking about when you run make? Or when you run the compiled binary? – harto Apr 22 '09 at 06:07
  • 1
    Just a titbit: at least one answer explains why SIGINT does not create a core, but I didn't see at first glance any talk about SIGTSTP (that's what Ctrl + Z does by default): it suspends the process. The shell builtin *`jobs`* will show you suspended processes. See also *`help fg`* (if using bash, at least, that will work). You can also generate this with the command *`kill -SIGTSTP `*. Note also that you can redefine which control combinations send what by default (that is, you could define SIGTSTP to be Ctrl + J if you wanted to). *`stty -a`* will show you the configuration - and other info. – Pryftan Sep 09 '19 at 19:21

7 Answers

26

A process dumps core when it is terminated by the operating system due to a fault in the program. The most typical reason this occurs is because the program accessed an invalid pointer value. Given that you have a sporadic dump, it's likely that you are using an uninitialized pointer.
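For illustration (this is not the asker's code, just a minimal sketch of the kind of bug being described), a deliberately invalid pointer dereference in C is enough to make the kernel deliver SIGSEGV and, if core dumps are enabled, leave a core.* file behind:

#include <stdio.h>

int main(void)
{
    int *p = NULL;   /* a deliberately invalid pointer, standing in for the
                        uninitialized-pointer bug described above */

    *p = 42;         /* undefined behaviour; on Linux this normally raises
                        SIGSEGV and, with core dumps enabled, writes core.* */

    printf("%d\n", *p);   /* never reached */
    return 0;
}

Building it with -g and running it in a shell where the core limit is non-zero should leave a core file that gdb can load for post-mortem inspection.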

Can you post the code that is causing the fault? Other than vague generalizations it's hard to guess what's wrong without actually seeing code.

As for what a core dump actually is, see the Wikipedia article on core dumps: https://en.wikipedia.org/wiki/Core_dump

JaredPar
  • 8
    In Linux, Ctrl + \ causes a core dump, even if the program has no faults and is running fine at the time of termination. – ely Jul 12 '13 at 20:41
  • To be more specific, it's when the process reads or writes outside its memory space. And there is another reason seemingly random crashes can happen (though I am not discounting the suggestion that it's likely a pointer-related bug): a process overwrites memory but does not go outside its memory space. Then, later on, long after the error occurs, you get a trashed stack. It can cause all sorts of problems and can be a nightmare to track down - though revision control can help figure it out, as well as experience. – Pryftan Sep 09 '19 at 18:57
  • @ely If Ctrl + \ is configured to send SIGQUIT, yes (which it is by default). But that's because the default action for SIGQUIT is to dump core. IIRC it does not invoke UB to ignore SIGQUIT, which means that you can prevent that. – Pryftan Sep 09 '19 at 19:01
13

As said by others before, the core dump is the result of a fault in the program.

You can configure whether a core dump is generated with the ulimit command. Entering

ulimit -c 0

disables core file generation in the active shell.
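If you would rather do the same thing from inside the program, a rough equivalent of ulimit -c 0 (just a sketch, not something this answer shows) is to lower RLIMIT_CORE with setrlimit(2):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Set the maximum core file size for this process (and its children)
       to zero, which effectively disables core dumps. */
    struct rlimit rl = { .rlim_cur = 0, .rlim_max = 0 };

    if (setrlimit(RLIMIT_CORE, &rl) != 0)
        perror("setrlimit");

    /* ... rest of the program ... */
    return 0;
}

Note that ulimit -c 0 affects the shell it is run in and the processes started from it, whereas this only affects the one program.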

If the program that generated the core was built with symbol information, you can do a post-mortem debugging session like this:

gdb <pathto/executable> --core <corefilename>
lothar
9

Ctrl + \ sends the signal SIGQUIT to the process. According to the POSIX.1 standard, the default action for this signal is to generate a core.

SIGILL, SIGABRT, SIGFPE and SIGSEGV are other signals for which the system will generate a core by default.

Please refer "man 7 signal" on your system for more details.

Onkar
  • True. It should maybe be noted that in addition to SIGILL, as you have there, there is also SIGKILL. SIGKILL does not generate a core dump by default, but just like SIGSTOP it can't be caught, ignored or blocked. I point this out because it's easy to read SIGILL as SIGKILL - especially if you don't know that both exist. – Pryftan Sep 09 '19 at 19:05
  • And on signal handlers note UB: *According to POSIX, the behavior of a process is undefined after it ignores a SIGFPE, SIGILL, or SIGSEGV signal that was not generated by kill(2) or raise(3). Integer division by zero has undefined result. On some architectures it will generate a SIGFPE signal. (Also dividing the most negative integer by -1 may generate SIGFPE.) Ignoring this signal might lead to an endless loop.* – Pryftan Sep 09 '19 at 19:06
8

Core dumps are generated when the process receives certain signals, such as SIGSEGV, which the kernel sends when the process accesses memory outside its address space. Typically that happens because of errors in how pointers are used. That means there's a bug in the program.

The core dump is useful for finding the bug. It is an image of the process's memory at the time of the problem, so a debugger such as gdb can be used to see what the program was doing then. The debugger can even (sometimes) access the values of variables in the program.

You can prevent core dumps from happening using the ulimit command.

6

It's a tool to aid in debugging an application that's behaving badly. It's large because it contains the contents of all the application's physical memory at the time it died, as well as the register states and stacks of all its threads.

They get generated when the kernel kills an application for doing something evil, like generating a segmentation violation or a bus error.

Jason Coco
  • Hum ... It only contains a dump of the process's memory, but still, it can be quite a lot of memory. – Ben Apr 22 '09 at 07:37
  • 'Evil'? That would imply that making a mistake is 'evil'. Maybe it's intentional exaggeration, but it's a huge exaggeration at that. Also, the size varies with the size of the process. And technically 'it' doesn't generate a segfault per se but rather triggers one. – Pryftan Sep 09 '19 at 19:10
3

You can avoid creating a core dump file by writing code that doesn't crash :)

Seriously, core dumps are useful because you can see the state of the program when it crashed, for "post mortem" debugging. You can open them in gdb and inspect the state of your program (especially if it was built with debugging).

Core dumps usually get made if the program receives a SIGSEGV (usually caused by an invalid pointer dereference), a SIGABRT (which would happen if you called abort(), or in C++ via the default terminate() handler for exceptions thrown from destructors, etc.) or some other fault. You can also trigger them explicitly with the debugger or programmatically.
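As an illustration of triggering one programmatically (a sketch, assuming core dumps are enabled via ulimit -c), calling abort() raises SIGABRT, whose default action is to terminate the process and dump core:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    fprintf(stderr, "about to call abort()\n");

    /* abort() raises SIGABRT; with the default disposition the process
       terminates and, if the core limit allows it, writes a core file. */
    abort();

    return 0;   /* never reached */
}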

If you've fixed all the bugs and it's perfect, you can delete them. Also, if you've changed your program in any way (and recompiled it) then they will become useless as the debug info now won't match what's in the core dump, so you can delete them then too.

MarkR
  • Actually, even if you don't recompile, this happens if you change the source, because the line information of the source file no longer matches. So if you're debugging by use of a core dump and you change the source, it has the potential to be very wrong. Even a single line can complicate things. – Pryftan Sep 09 '19 at 19:12
1

The point of Ctrl + \ is to generate a core dump. That's what SIGQUIT does. If you don't want it to be generated, use Ctrl + C (SIGINT) instead. If the program in question is not responding to SIGINT but you need to kill it from the terminal, either you or the developer is doing something wrong.

Programs designed not to be killed from the terminal with Ctrl + C should still respond gracefully to SIGTERM, which can be triggered in another terminal via kill -TERM .... If all else fails, SIGKILL will force an immediate termination.
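As a minimal sketch of what "respond gracefully to SIGTERM" can look like (the flag and handler names here are illustrative, not from the answer):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t stop_requested = 0;   /* illustrative name */

static void handle_sigterm(int sig)
{
    (void)sig;
    stop_requested = 1;   /* just set a flag; do the real cleanup in main() */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handle_sigterm;
    sigemptyset(&sa.sa_mask);

    if (sigaction(SIGTERM, &sa, NULL) != 0)
        perror("sigaction");

    while (!stop_requested) {
        sleep(1);   /* stands in for the program's normal work */
    }

    puts("SIGTERM received; cleaning up and exiting gracefully");
    return 0;
}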

Zenexer