In statistical significance testing, the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
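As a concrete illustration of this definition, the following sketch computes a p-value for a coin-flipping experiment; the data (60 heads in 100 tosses) and the use of SciPy's binomial machinery are assumptions made for the example, not part of the original text.

```python
from scipy.stats import binom, binomtest

# Hypothetical example: test whether a coin is fair.
# Null hypothesis H0: P(heads) = 0.5; assumed data: 60 heads in 100 tosses.
n, k = 100, 60

# Probability, under H0, of a result at least as extreme in one direction:
# P(X >= 60) for X ~ Binomial(100, 0.5), via the survival function at k - 1.
p_one_sided = binom.sf(k - 1, n, 0.5)

# Binomial(n, 0.5) is symmetric, so the two-sided p-value doubles the tail.
p_two_sided = 2 * p_one_sided

print(f"one-sided p = {p_one_sided:.4f}")   # ~0.0284
print(f"two-sided p = {p_two_sided:.4f}")   # ~0.0569

# SciPy's built-in binomial test reports the same two-sided p-value.
print(binomtest(k, n, p=0.5).pvalue)
```

A two-sided p-value of roughly 0.057 means that, if the coin were fair, a result at least as far from 50 heads as the one observed would occur in about 5.7% of repeated experiments.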
The p-value is a key concept in the approach of Ronald Fisher, who used it to measure the weight of the data against a specified hypothesis and as a guideline for disregarding data that do not reach a specified significance level. Fisher's approach involves no alternative hypothesis; the alternative hypothesis is instead a feature of the Neyman–Pearson approach. The p-value should not be confused with the Type I error rate (the false positive rate) α of the Neyman–Pearson approach: although α is also called a "significance level" and is commonly set at 0.05, the two terms have different meanings, the two approaches are incompatible, and the numbers p and α cannot meaningfully be compared.