# The Minimum Sample Size for a t-test: Explanation & Example

A common question students ask is:

Is there a minimum sample size required to perform a t-test?

No. There is no minimum sample size required to perform a t-test.

In fact, the first t-test ever performed only used a sample size of four.

However, if the assumptions of a t-test are not met then the results could be unreliable.

Also, if the sample size is too small then the power of the test could be too low to detect meaningful differences in the data.

Let’s check out each of these potential issues in more detail.

### Understanding the Assumptions of t-tests

A one sample t-test is used to test whether or not the mean of a population is equal to some value.

This test makes the following assumptions:

• Independence: The observations in the sample should be independent.
• Random Sampling: The observations should be collected using a random sampling method to maximize the chances that the sample is representative of the population of interest.
• Normality: The observations should be roughly normally distributed.
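As a concrete sketch, here is what a one sample t-test looks like in base R. The data and the hypothesized mean of 20 are made-up assumptions for illustration, not values from any real study:

```r
# Hypothetical sample of 12 measurements (made-up data for illustration)
x <- c(20.1, 19.8, 21.2, 20.5, 19.9, 20.7, 20.3, 19.6, 20.9, 20.4, 20.0, 20.6)

# Test H0: the population mean equals 20
result <- t.test(x, mu = 20)

result$p.value   # p-value of the test
result$conf.int  # 95% confidence interval for the mean
```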

A two sample t-test is used to test whether there is a significant difference between two population means.

This test makes the following assumptions:

• Independence: The observations in each sample should be independent.
• Random Sampling: The observations in each sample should be collected using a random sampling method.
• Normality: Each sample should be roughly normally distributed.
• Equal Variance: Each sample should have approximately the same variance.
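As an illustrative sketch (the two groups here are simulated, not from any real dataset), a two sample t-test in R under the equal variance assumption looks like this:

```r
set.seed(1)  # for reproducible simulated data

# Two hypothetical samples drawn from normal populations with equal variance
group1 <- rnorm(30, mean = 10, sd = 2)
group2 <- rnorm(30, mean = 11, sd = 2)

# Two sample t-test; var.equal = TRUE gives the classic Student's t-test
res <- t.test(group1, group2, var.equal = TRUE)
res$p.value
```

Note that R's default, var.equal = FALSE, runs Welch's t-test instead, which does not assume equal variances.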

For either type of t-test, if one or more of these assumptions is violated then the results of the test can become unreliable.

In this case, it’s best to use a non-parametric alternative test that doesn’t make these assumptions.

The non-parametric alternative to a one sample t-test is the Wilcoxon Signed Rank Test.

The non-parametric alternative to a two sample t-test is the Mann-Whitney U Test.
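Both alternatives are available in base R through the wilcox.test() function. Here is a minimal sketch using simulated data (all values below are assumptions for illustration):

```r
set.seed(1)  # reproducible simulated data

x <- rnorm(15, mean = 20, sd = 2)       # one hypothetical sample
group1 <- rnorm(20, mean = 10, sd = 2)  # two hypothetical samples
group2 <- rnorm(20, mean = 11, sd = 2)

# Wilcoxon signed rank test: non-parametric alternative to the one sample t-test
res1 <- wilcox.test(x, mu = 20)

# Mann-Whitney U test: non-parametric alternative to the two sample t-test
res2 <- wilcox.test(group1, group2)

res1$p.value
res2$p.value
```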

### Understanding the Power of t-tests

Statistical power refers to the probability that a test will detect some effect when there actually is one.

It can be shown that the lower the sample size used, the lower the statistical power of a given test. This is why researchers typically want larger sample sizes so that they have higher power and thus a greater probability of detecting true differences.

For example, suppose the true effect size between two populations is 0.5 – a “medium” effect size. We can use the following R code to calculate the power of a two sample t-test using various sample sizes:

```r
# sample size n=10
power.t.test(n=10, delta=.5, sd=1, sig.level=.05, type='two.sample')$power

[1] 0.1838375

# sample size n=30
power.t.test(n=30, delta=.5, sd=1, sig.level=.05, type='two.sample')$power

[1] 0.477841

# sample size n=50
power.t.test(n=50, delta=.5, sd=1, sig.level=.05, type='two.sample')$power

[1] 0.6968888
```

Here’s how to interpret the results:

• When each sample size is n = 10, the power is 0.184.
• When each sample size is n = 30, the power is 0.478.
• When each sample size is n = 50, the power is 0.697.

We can see that the power of the test increases as the sample size increases.

So, we don’t need a minimum sample size to perform a t-test, but small sample sizes lead to lower statistical power and thus a reduced ability to detect a true difference in the data.
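The power.t.test() function can also be run in reverse: fix the desired power and let it solve for the required sample size. For example, to detect the same medium effect size of 0.5 with 80% power:

```r
# Solve for the per-group sample size needed for 80% power
res <- power.t.test(delta = 0.5, sd = 1, sig.level = 0.05,
                    power = 0.8, type = "two.sample")

ceiling(res$n)  # round up to a whole number of subjects per group
```

This works out to roughly 64 subjects per group, which matches the common rule of thumb n ≈ 16/d² for 80% power at a 5% significance level.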

### Conclusion

Here’s a summary of what we’ve learned:

• There is no minimum sample size required to perform a t-test.
• If the assumptions of a t-test are not met, we should use a non-parametric alternative.
• If the sample size is too small, the power of the t-test will be low, reducing its ability to detect true differences in the data.