
Consider the following C code:

int n;
scanf("%d",n)

It gives the error "Segmentation fault (core dumped)" when compiled with GCC and run on Linux (Mandriva),

but the following code

int *p=NULL;
*P=8;

gives only "Segmentation fault". Why is that so?

millimoose
Rupak
    Operating system choice, likely. http://stackoverflow.com/questions/775872/why-core-dump-file-is-generated – Carl Norum Oct 24 '11 at 20:17
    Well, neither program will compile, actually. You're missing a semicolon in the first example, and C is case-sensitive (p vs. P) in the second. – derobert Oct 24 '11 at 20:22
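Putting those comment fixes together, a compilable version of the first snippet could look like this. This is a minimal sketch (the function name read_int is made up for illustration); the substantive fix is passing &n rather than n:

```c
#include <stdio.h>

/* Corrected version of the first snippet: scanf needs the *address*
   of n. Writing scanf("%d", n) passes the (uninitialized) value of n
   as a pointer, so scanf stores through a garbage address -- that is
   where the segmentation fault comes from. */
int read_int(FILE *in, int *out)
{
    return fscanf(in, "%d", out);   /* returns 1 on success */
}
```

From main you would call it as read_int(stdin, &n), noting the &. The second snippet would similarly need p to point at real storage (e.g. int x; int *p = &x; *p = 8;) before the write is legal.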

3 Answers


A core dump is a file containing a dump of the state and memory of a program at the time it crashed. Since core dumps can take non-trivial amounts of disk space, there is a configurable limit on how large they can be. You can see it with ulimit -c.

Now, when you get a segmentation fault, the default action is to terminate the process and dump core. Your shell reports what happened: if a process terminated on a segmentation fault signal, it prints "Segmentation fault", and if that process additionally dumped core (which requires the ulimit setting and the permissions on the directory where the core dump is generated to allow it), it says so.
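On a typical Linux shell you can check and raise that limit yourself. A quick illustration (defaults and core-file locations vary by distribution):

```shell
# Show the current core-dump size limit; "0" means no core file is written
ulimit -c

# Allow unlimited-size core dumps for this shell session
ulimit -c unlimited

# A crashing program started from this shell can now leave a core file
# (where it lands depends on /proc/sys/kernel/core_pattern)
ulimit -c
```

This would explain seeing "(core dumped)" on one machine or session but not another.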

ninjalj

Assuming you're running both of these on the same system, with the same ulimit -c settings (which would be my first guess as to the difference you're seeing), then it's possible the optimizer is "noticing" the clearly undefined behavior in the second example and generating its own exit. You could check with objdump -x.

derobert

In the first case, n could have any value: you might own that memory (or not), it might be writable (or not), but it probably exists. There is no reason that n is necessarily zero.

Writing to NULL is definitely naughty, and something the OS is going to notice!

Martin Beckett
  • Using an uninitialized variable like that definitely isn't defined behavior. – derobert Oct 24 '11 at 20:27
  • The point is that the runtime has an easier job of noticing that you are writing to address 0 than it has that you are writing to some other random value. – Martin Beckett Oct 24 '11 at 20:33
  • But it obviously did notice, as it segfaulted. – derobert Oct 24 '11 at 20:39
  • That was just good luck! Bad would be that it worked in testing but not with the customer. It is of course undefined, but hard to detect – Martin Beckett Oct 24 '11 at 20:52
  • Actually both of those could result in hard-to-find bugs when you consider the kinds of optimizations compilers have been picking up lately. (`int c = *foo; if (!c) abort();` may very well consider that if to be dead code, and eliminate it. Even if the *foo came from inlining a chain of functions.) But I think the question is why one of them dumped core and the other didn't. – derobert Oct 24 '11 at 21:13
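A close cousin of the pattern in that last comment can be sketched like this (the function name is hypothetical, and whether the check is actually eliminated depends on the compiler and optimization level):

```c
#include <stdlib.h>

/* The dereference happens before the null check, so an optimizing
   compiler is entitled to assume foo is non-null and may delete the
   check as dead code -- a classic undefined-behavior-driven
   optimization. At -O0 the check usually survives. */
int deref_then_check(int *foo)
{
    int c = *foo;          /* undefined behavior if foo is NULL */
    if (foo == NULL)       /* may be optimized away entirely */
        abort();
    return c;
}
```

With a valid pointer the function simply returns the pointed-to value; the danger is that the NULL case, which "worked" in testing at -O0, can silently change behavior at higher optimization levels.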