A one-way ANOVA is used to determine whether or not there is a significant difference between the means of three or more independent groups.

One of the assumptions of a one-way ANOVA is that the variances of the populations that the samples come from are equal.

One of the most common ways to test for this is by using a **Brown-Forsythe test**, which is a statistical test that uses the following hypotheses:

- **H0:** The variances among the populations are equal.
- **HA:** The variances among the populations are not equal.

If the p-value of the test is less than some significance level (e.g. α = .05) then we reject the null hypothesis and conclude that the variances are not equal among the different populations.

This tutorial provides a step-by-step example of how to perform a Brown-Forsythe test in Python.

**Step 1: Enter the Data**

Suppose researchers want to know if three different fertilizers lead to different levels of plant growth.

They randomly select 30 different plants and split them into three groups of 10, applying a different fertilizer to each group. At the end of one month they measure the height of each plant.

The following arrays show the height of plants in each of the three groups:

```python
group1 = [7, 14, 14, 13, 12, 9, 6, 14, 12, 8]
group2 = [15, 17, 13, 15, 15, 13, 9, 12, 10, 8]
group3 = [6, 8, 8, 9, 5, 14, 13, 8, 10, 9]
```

**Step 2: Summarize the Data**

Before we perform a Brown-Forsythe test, we can calculate the variance of the plant measurements in each group:

```python
#import numpy
import numpy as np

#calculate variance of plant measurements in each group
print(np.var(group1), np.var(group2), np.var(group3))

#8.69 7.81 7.0
```

We can see that the variances between the groups differ, but to determine if these differences are statistically significant we can perform the Brown-Forsythe test.

**Step 3: Perform the Brown-Forsythe Test**

To perform a Brown-Forsythe test in Python, we can use the scipy.stats.levene() function and specify the center to be **median**:

```python
import scipy.stats as stats

#perform the Brown-Forsythe test (Levene's test centered at the median)
stats.levene(group1, group2, group3, center='median')

#LeveneResult(statistic=0.17981072555205047, pvalue=0.8364205218185946)
```

From the output we can observe the following:

- Test statistic: **0.1798**
- p-value: **0.8364**

The p-value of the test turns out to be greater than .05, so we fail to reject the null hypothesis of the test.

The differences in the variances between the groups are not statistically significant.

**Next Steps**

If we fail to reject the null hypothesis of the Brown-Forsythe test, then we can proceed to perform a one-way ANOVA on the data.
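As a sketch of that next step, a one-way ANOVA on the same three groups can be performed with the **scipy.stats.f_oneway()** function (no expected output is shown, since the exact values depend on the data):

```python
import scipy.stats as stats

#plant heights from Step 1
group1 = [7, 14, 14, 13, 12, 9, 6, 14, 12, 8]
group2 = [15, 17, 13, 15, 15, 13, 9, 12, 10, 8]
group3 = [6, 8, 8, 9, 5, 14, 13, 8, 10, 9]

#one-way ANOVA: tests whether the mean plant height differs between groups
result = stats.f_oneway(group1, group2, group3)
print(result.statistic, result.pvalue)
```

If the resulting p-value is below the chosen significance level, we would conclude that at least one group mean differs from the others.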

However, if we reject the null hypothesis then this indicates that the assumption of equal variances is violated. In this case, we have two options:

**1. Proceed with a One-Way ANOVA anyway.**

It turns out that a one-way ANOVA is actually robust to unequal variances as long as the largest variance is no larger than 4 times the smallest variance.

In step 2 from the example above, we found that the smallest variance was 7.0 and the largest variance was 8.69. Thus, the ratio of the largest to smallest variance is 8.69 / 7.0 = **1.24**.

Since this value is less than 4, we could simply proceed with the one-way ANOVA even if the Brown-Forsythe test indicated that the variances were not equal.
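This rule-of-thumb check can be computed directly from the data, reusing the groups from Step 1:

```python
import numpy as np

#plant heights from Step 1
group1 = [7, 14, 14, 13, 12, 9, 6, 14, 12, 8]
group2 = [15, 17, 13, 15, 15, 13, 9, 12, 10, 8]
group3 = [6, 8, 8, 9, 5, 14, 13, 8, 10, 9]

#compute the variance of each group
variances = [np.var(g) for g in (group1, group2, group3)]

#ratio of largest to smallest variance
ratio = max(variances) / min(variances)
print(round(ratio, 2))

#1.24
```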

**2. Perform a Kruskal-Wallis Test**

If the ratio of the largest variance to the smallest variance is greater than 4, we may instead choose to perform a Kruskal-Wallis test. This is considered the non-parametric equivalent to the one-way ANOVA.

You can find a step-by-step example of a Kruskal-Wallis test in Python here.
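As a minimal sketch, a Kruskal-Wallis test can be performed on the same groups with the **scipy.stats.kruskal()** function (exact output values are not shown here):

```python
import scipy.stats as stats

#plant heights from Step 1
group1 = [7, 14, 14, 13, 12, 9, 6, 14, 12, 8]
group2 = [15, 17, 13, 15, 15, 13, 9, 12, 10, 8]
group3 = [6, 8, 8, 9, 5, 14, 13, 8, 10, 9]

#Kruskal-Wallis test: non-parametric alternative to the one-way ANOVA
result = stats.kruskal(group1, group2, group3)
print(result.statistic, result.pvalue)
```

Like the ANOVA, a p-value below the significance level would indicate that at least one group differs from the others.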