When you run a statistical test, whether it’s a chi-square test, a test for a population mean, a test for a population proportion, a linear regression, or any other test, you’re often interested in the resulting p-value from that test.
A p-value tells you the strength of evidence against a null hypothesis: it is the probability of observing a result at least as extreme as the one in your sample, assuming the null hypothesis is true. The smaller the p-value, the stronger the evidence against the null.
If the p-value is less than the significance level, we reject the null hypothesis.
So, when you get a p-value of 0.000, you should compare it to the significance level. Common significance levels include 0.1, 0.05, and 0.01.
Since 0.000 is lower than all of these significance levels, we would reject the null hypothesis in each case.
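The comparison rule can be sketched in a few lines of Python (the p-value and significance levels below are just the ones discussed above):

```python
# A reported p-value of 0.000 compared against common significance levels
p_value = 0.000

for alpha in (0.1, 0.05, 0.01):
    # Reject the null hypothesis whenever p < alpha
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    print(f"alpha = {alpha}: {decision}")
```

At every significance level, the decision is the same: reject the null hypothesis.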
Let’s walk through an example to clear things up.
Example: Getting a P-Value of 0.000
A factory claims that they produce tires that each weigh 200 pounds.
An auditor comes in and tests the null hypothesis that the mean weight of a tire is 200 pounds against the alternative hypothesis that the mean weight of a tire is not 200 pounds, using a 0.05 level of significance.
The null hypothesis (H0): μ = 200
The alternative hypothesis (Ha): μ ≠ 200
Upon conducting a hypothesis test for a mean, the auditor gets a p-value of 0.000.
Since the p-value of 0.000 is less than the significance level of 0.05, the auditor rejects the null hypothesis.
Thus, he concludes that there is sufficient evidence to say that the true average weight of a tire is not 200 pounds.
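A test like the auditor's can be reproduced with hypothetical data. The weights below are made up for illustration, and the p-value uses a normal approximation to the t distribution so the sketch needs only the standard library (a library function such as `scipy.stats.ttest_1samp` would give the exact t-test):

```python
import math
import statistics

# Hypothetical tire weights in pounds -- illustrative data, not real measurements
weights = [190.2, 189.5, 191.1, 190.8, 189.9,
           190.4, 191.3, 189.7, 190.6, 190.0]

n = len(weights)
sample_mean = statistics.mean(weights)
sample_sd = statistics.stdev(weights)

# One-sample t statistic for H0: mu = 200
t = (sample_mean - 200) / (sample_sd / math.sqrt(n))

# Two-sided p-value via a normal approximation to the t distribution
# (fine here because |t| is enormous)
p_value = math.erfc(abs(t) / math.sqrt(2))

print(f"t = {t:.2f}, p = {p_value:.3f}")  # p displays as 0.000
```

Because the sample mean is far from 200 relative to its standard error, the p-value is far below 0.05, so the null hypothesis is rejected.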
What a P-Value of 0.000 Means
Whether you use Microsoft Excel, a TI-84 calculator, SPSS, or some other software to compute the p-value of a statistical test, the p-value is often not exactly 0.000 but rather something extremely small, like 0.000000000023.
Most software displays only three decimal places, though, which is why the p-value shows up as 0.000.
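You can see this rounding behavior directly in Python, using the tiny p-value mentioned above:

```python
# A p-value that is extremely small, but not exactly zero
p_value = 0.000000000023

# Rounded to three decimal places, it displays as 0.000
print(f"{p_value:.3f}")

# Yet it is still below any common significance level
print(p_value < 0.01)
```

The displayed "0.000" is a rounding artifact; the underlying number is simply smaller than 0.0005.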
If you conduct a statistical test using a significance level of 0.1, 0.05, or 0.01 (or any significance level greater than 0.000) and get a p-value of 0.000, then reject the null hypothesis.