# How to Perform a Durbin-Watson Test in SPSS

One of the assumptions in linear regression is that there is no correlation between the residuals, i.e. the residuals are independent.

One way to determine if this assumption is met is to perform a Durbin-Watson test, which is used to detect the presence of autocorrelation in the residuals of a regression model.

This test uses the following hypotheses:

• H0: There is no correlation among the residuals.
• HA: The residuals are autocorrelated.

The test statistic is approximately equal to 2(1 − r), where r is the lag-1 sample autocorrelation of the residuals.

Thus, the test statistic will always be between 0 and 4, where:

• A test statistic of 2 indicates no serial correlation.
• The closer the test statistic is to 0, the more evidence of positive serial correlation.
• The closer the test statistic is to 4, the more evidence of negative serial correlation.

As a rule of thumb, test statistic values in the range of 1.5 to 2.5 are considered normal.

However, values outside of this range could indicate that autocorrelation is a problem.
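To see how the statistic behaves, the following Python sketch computes the Durbin-Watson statistic directly and compares it to the 2(1 − r) approximation. The residual values here are made up for illustration and are not taken from the example below:

```python
# Hypothetical residuals with alternating signs (illustrative values only).
residuals = [0.5, -0.3, 0.8, -0.6, 0.2, -0.4, 0.7, -0.1]
n = len(residuals)

# Durbin-Watson statistic: sum of squared successive differences
# divided by the sum of squared residuals.
dw = (sum((residuals[t] - residuals[t - 1]) ** 2 for t in range(1, n))
      / sum(e ** 2 for e in residuals))

# Lag-1 sample autocorrelation of the residuals.
mean = sum(residuals) / n
r = (sum((residuals[t] - mean) * (residuals[t - 1] - mean) for t in range(1, n))
     / sum((e - mean) ** 2 for e in residuals))

print(round(dw, 3))           # prints 3.265 (exact statistic)
print(round(2 * (1 - r), 3))  # prints 3.5 (the 2*(1 - r) approximation)
```

Because the residuals alternate in sign, r is strongly negative and both values land well above 2.5, the pattern we would expect for negative serial correlation.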

The following example shows how to perform a Durbin-Watson test for a regression model in SPSS.

## Example: How to Perform a Durbin-Watson Test in SPSS

Suppose we have the following dataset in SPSS that contains information about various basketball players:

Suppose that we would like to fit a multiple linear regression model using points, assists and rebounds as predictor variables and rating as the response variable.

To do so, click the Analyze tab, then click Regression, then click Linear:

In the new window that appears, drag rating to the Dependent panel and then drag points, assists and rebounds to the Independent panel:

Then click the Statistics button.

In the new window that appears, check the box next to Durbin-Watson under Residuals:

Then click Continue. Then click OK.

The following results will be shown:

The test statistic for the Durbin-Watson test will be shown in the Model Summary table.

We can see that the test statistic turns out to be 2.392.

Since this value is within the range of 1.5 to 2.5, we would not consider autocorrelation to be a problem in this regression model.
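To make the decision rule explicit, here is a small Python helper that applies the 1.5 to 2.5 rule of thumb from earlier. This is a sketch of the rule, not part of SPSS itself:

```python
def autocorrelation_flag(dw, low=1.5, high=2.5):
    """Classify a Durbin-Watson statistic using the 1.5-2.5 rule of thumb."""
    if dw < low:
        return "possible positive autocorrelation"
    if dw > high:
        return "possible negative autocorrelation"
    return "no cause for concern"

# The statistic reported in the SPSS Model Summary table above.
print(autocorrelation_flag(2.392))  # prints "no cause for concern"
```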

## How to Handle Autocorrelation

If you reject the null hypothesis and conclude that autocorrelation is present in the residuals, then you may consider the following options to correct this problem if you deem it to be serious enough:

• For positive serial correlation, consider adding lags of the dependent and/or independent variable to the model.
• For negative serial correlation, check to make sure that none of your variables are overdifferenced.
• For seasonal correlation, consider adding seasonal dummy variables to the model.
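As an illustration of the first remedy, the sketch below adds a one-period lag of the dependent variable as an extra predictor column. The series here is made up for illustration, not the basketball data from this example; after refitting the model on the lagged design, you would run the Durbin-Watson test again on the new residuals:

```python
# Hypothetical time-ordered data (illustrative values only).
y = [10, 12, 13, 15, 16, 18, 19, 21]   # dependent variable
x = [1, 2, 3, 4, 5, 6, 7, 8]           # original predictor

# Build a one-period lag of y; the first observation has no lagged
# value, so the usable sample shrinks by one row.
y_lag = y[:-1]
rows = [(xi, yl, yi) for xi, yl, yi in zip(x[1:], y_lag, y[1:])]

for row in rows:
    print(row)  # (predictor, lagged y, current y)
```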