A Breusch-Pagan Test is used to determine if heteroscedasticity is present in a regression analysis.

This tutorial explains how to perform a Breusch-Pagan Test in R.

**Example: Breusch-Pagan Test in R**

In this example we will fit a regression model using the built-in R dataset **mtcars** and then perform a Breusch-Pagan Test using the **bptest** function from the **lmtest** library to determine if heteroscedasticity is present.

**Step 1: Fit a regression model.**

First, we will fit a regression model using **mpg** as the response variable and **disp** and **hp** as the two explanatory variables.

```r
#load the dataset
data(mtcars)

#fit a regression model
model <- lm(mpg ~ disp + hp, data = mtcars)

#view model summary
summary(model)
```

```
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) 30.735904   1.331566  23.083  < 2e-16 ***
disp        -0.030346   0.007405  -4.098 0.000306 ***
hp          -0.024840   0.013385  -1.856 0.073679 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.127 on 29 degrees of freedom
Multiple R-squared:  0.7482,	Adjusted R-squared:  0.7309 
F-statistic: 43.09 on 2 and 29 DF,  p-value: 2.062e-09
```

**Step 2: Perform a Breusch-Pagan Test.**

Next, we will perform a Breusch-Pagan Test to determine if heteroscedasticity is present.

```r
#load lmtest library
library(lmtest)

#perform Breusch-Pagan Test
bptest(model)
```

```
	studentized Breusch-Pagan test

data:  model
BP = 4.0861, df = 2, p-value = 0.1296
```

The test statistic is **4.0861** and the corresponding p-value is **0.1296**. Since the p-value is not less than 0.05, we fail to reject the null hypothesis. We do not have sufficient evidence to say that heteroscedasticity is present in the regression model.
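To see where the test statistic comes from, you can reproduce the studentized Breusch-Pagan statistic by hand: regress the squared residuals of the original model on the same explanatory variables, then compute n times the R-squared of that auxiliary regression. This sketch uses only base R; the variable names (`res2`, `aux`, `bp_stat`) are illustrative.

```r
#fit the original regression model
model <- lm(mpg ~ disp + hp, data = mtcars)

#auxiliary regression: squared residuals on the original predictors
res2 <- resid(model)^2
aux <- lm(res2 ~ disp + hp, data = mtcars)

#studentized BP statistic = n * R-squared of the auxiliary regression
bp_stat <- nrow(mtcars) * summary(aux)$r.squared

#p-value from a chi-square distribution with df = number of predictors
bp_pval <- pchisq(bp_stat, df = 2, lower.tail = FALSE)

bp_stat  #matches the BP value reported by bptest()
bp_pval  #matches the p-value reported by bptest()
```

This is the same calculation **bptest** performs by default, which is why the statistic and p-value match the output above.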

**What To Do Next**

If you fail to reject the null hypothesis of the Breusch-Pagan test, then you do not have evidence of heteroscedasticity and you can proceed to interpret the output of the original regression.

However, if you reject the null hypothesis, this means heteroscedasticity is present in the data. In this case, the standard errors that are shown in the output table of the regression may be unreliable.

There are a couple of common ways to fix this issue, including:

**1. Transform the response variable.** You can try performing a transformation on the response variable. For example, you could use the log of the response variable instead of the original response variable. Taking the log of the response variable often reduces heteroscedasticity. Another common transformation is to use the square root of the response variable.
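As a sketch of this approach, you could refit the model with a log-transformed response and re-run the Breusch-Pagan test (the object name `log_model` is illustrative):

```r
#load lmtest for bptest()
library(lmtest)

#refit the model using the log of the response variable
log_model <- lm(log(mpg) ~ disp + hp, data = mtcars)

#re-run the Breusch-Pagan test on the transformed model
bptest(log_model)
```

If the transformation helped, the p-value of the new test should be larger than before; if it is still below 0.05, you may need to try another transformation or weighted regression.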

**2. Use weighted regression.** This type of regression assigns a weight to each data point based on the variance of its fitted value. Essentially, this gives smaller weights to data points that have higher variances, which shrinks their squared residuals. When the proper weights are used, this can eliminate the problem of heteroscedasticity.