What is a core dump file in Linux? What information does it provide?
2 Answers
It's basically the process address space in use (from the mm_struct structure, which contains all the virtual memory areas), plus any other supporting information*a, at the time it crashed.
For example, let's say you dereference a NULL pointer and receive a SEGV signal, causing the process to exit. As part of that, the operating system tries to write out the process state to a file for later post-mortem analysis.
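A minimal sketch of such a program (the file name crash.c is just illustrative) could look like:

    #include <stddef.h>

    int main(void)
    {
        int *p = NULL;  /* pointer deliberately left NULL */
        *p = 42;        /* dereferencing it raises SIGSEGV; with core dumps
                           enabled, the kernel writes a core file as the
                           process is terminated */
        return 0;
    }

Whether a core file actually appears depends on the core size limit (ulimit -c) and the kernel's core_pattern setting.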
You can load the core file into a debugger along with the executable file (for symbols and other debugging information, for example) and poke around to try and discover what caused the problem.
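As a rough example of that workflow, assuming the sketch above was built as ./crash and dumped a file named core: run gdb ./crash core, then bt prints the call stack at the moment of the crash, info registers shows the saved register state from the dump, and frame/print let you walk the stack and inspect variables, provided the binary carries debug information.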
*a: In kernel version 2.6.38, fs/exec.c/do_coredump() is the one responsible for core dumps, and you can see that it's passed the signal number, exit code and registers. It in turn passes the signal number and registers to a binary-format-specific (ELF, a.out, etc.) dumper. The ELF dumper is fs/binfmt_elf.c/elf_core_dump(), and you can see that it outputs non-memory-based information, like thread details, in fs/binfmt_elf.c/fill_note_info(), then returns to output the process space.

- @Jay: The variables themselves will be there since they're in the address space. Information on them (such as mapping names to locations) is not. This is something retrieved from the executable when loading into the debugger (assuming the executable was compiled with debug info). – paxdiablo Mar 16 '11 at 06:07
- @Jay The values of local variables will be there as well, and how to access them is made known to the debugger through the symbol table in the executable, if the executable has a debugging symbol table, such as created by gcc -g. (For other compilers, check their documentation.) – Jim Balter Mar 16 '11 at 07:08
- To me an _address space_ is a set of addresses. I think it would be more accurate to say that a core file is a dump of the _process memory_ at the time it crashed. – Ben Mar 16 '11 at 13:07
If a program terminates abnormally, the state of the program at the point of abnormal termination should be recorded for further analysis, and that state is recorded in the core dump file.
In a multiuser, multitasking environment, accessing resources that don't belong to you is not acceptable. If process A tries to access system resources that belong to process B, that's a violation; at that point the operating system kills the process and stores its state in a file. That file is called the core dump file. There are many reasons for a core dump; I've just explained one of the possibilities. Usually it is caused by SIGSEGV (segmentation fault) or SIGBUS (bus error).
The core dump file contains details of where the abnormal termination happened, the process stack, the symbol table, etc.
There are many tools available to debug core dumps: gdb, dbx, objdump, mdb.
Compiler options exist to make debugging easier: compiling with the appropriate flag (usually -g) leaves extra information in the symbol table of the object files, which helps debuggers (gdb/dbx) resolve symbolic references easily.
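As a rough illustration (the file and program names here are just examples): building with cc -g -O0 crash.c -o crash and then loading the dump with gdb ./crash core lets the debugger map addresses in the core back to source lines and variable names. Without -g you can still get a backtrace of raw addresses and whatever symbols the binary exports, but no source-level view.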
