A **MANOVA** (multivariate analysis of variance) is used to analyze how one or more factor variables affect multiple response variables.

For example, we might use a MANOVA to analyze how level of education (High school degree, Associate’s degree, Bachelor’s degree, Master’s degree) affects both annual income and total student loan debt.
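As a sketch of what such an analysis computes, the example below runs a one-way MANOVA by hand with NumPy and SciPy using Wilks' lambda. The income and debt numbers are simulated purely for illustration, and the p-value comes from Bartlett's standard chi-square approximation to the Wilks' lambda test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated (made-up) income and debt data for three education groups
groups = [
    rng.normal(loc=[40, 30], scale=5, size=(30, 2)),   # High school
    rng.normal(loc=[55, 45], scale=5, size=(30, 2)),   # Bachelor's
    rng.normal(loc=[70, 60], scale=5, size=(30, 2)),   # Master's
]

X = np.vstack(groups)
N, p = X.shape          # total observations, number of response variables
k = len(groups)         # number of factor levels

grand_mean = X.mean(axis=0)

# Within-group (W) and between-group (B) sums-of-squares-and-cross-products
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
B = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                          g.mean(axis=0) - grand_mean) for g in groups)

# Wilks' lambda: ratio of within-group to total generalized variance
wilks_lambda = np.linalg.det(W) / np.linalg.det(B + W)

# Bartlett's chi-square approximation to the Wilks' lambda test
chi2_stat = -(N - 1 - (p + k) / 2) * np.log(wilks_lambda)
df = p * (k - 1)
p_value = stats.chi2.sf(chi2_stat, df)

print(f"Wilks' lambda = {wilks_lambda:.4f}, p-value = {p_value:.4g}")
```

A small Wilks' lambda (and a small p-value) indicates that group means differ across the response variables jointly. In practice you would typically use a vetted routine in R, SPSS, or Python rather than this hand-rolled version.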

**Related:** The Differences Between ANOVA, ANCOVA, MANOVA, and MANCOVA

Whenever we perform a MANOVA, we should check to make sure that the following assumptions are met:

**1. Multivariate Normality** – Response variables are multivariate normally distributed within each group of the factor variable(s).

**2. Independence** – Each observation is randomly and independently sampled from the population.

**3. Equal Variance** – The population covariance matrices of each group are equal.

**4. No Multivariate Outliers** – There are no extreme multivariate outliers.

In this post, we explain each assumption and how to determine whether it is met.

**Assumption 1: Multivariate Normality**

A MANOVA assumes that the response variables are multivariate normally distributed within each group of the factor variable.

If there are at least 20 observations for each combination of factor * response variable, then we can assume that the multivariate normality assumption is met.

If there are fewer than 20 observations for each combination of factor * response variable, we can create a scatterplot matrix of the residuals to visually check whether this assumption is met.

Fortunately, MANOVA is known to be robust against departures from multivariate normality, so small to moderate departures typically don’t cause any problems.
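One informal way to check this assumption numerically is a chi-square Q-Q comparison: under multivariate normality, the squared Mahalanobis distances of the observations from their centroid are approximately chi-square distributed. The sketch below (with simulated data, for illustration) correlates the sorted distances with the corresponding chi-square quantiles; values near 1 support the assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated responses for one group of the factor variable
X = rng.multivariate_normal(mean=[50, 20], cov=[[25, 10], [10, 16]], size=200)

n, p = X.shape
centered = X - X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

# Squared Mahalanobis distance of each observation from the group centroid;
# under multivariate normality these follow roughly a chi-square(p) distribution
d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

# Correlate sorted distances with chi-square quantiles (a chi-square Q-Q check)
theoretical = stats.chi2.ppf((np.arange(1, n + 1) - 0.5) / n, df=p)
r = np.corrcoef(np.sort(d2), theoretical)[0, 1]

print(f"Q-Q correlation = {r:.3f}")
```

This is a visual/heuristic check rather than a formal test; formal alternatives such as the Henze-Zirkler test exist in some software packages.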

**Assumption 2: Independence**

A MANOVA assumes that each observation is randomly and independently sampled from the population.

As long as a probability sampling method (one in which every member of the population has a known, nonzero probability of being selected for the sample) is used to collect the data, we can assume that each observation has been randomly and independently sampled.

Examples of probability sampling methods include:

- Simple random sampling
- Stratified random sampling
- Cluster random sampling
- Systematic random sampling
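As a minimal sketch of the first two methods above, the following uses Python's standard library on a hypothetical population of 1,000 people (all labels and sizes are made up for illustration):

```python
import random

random.seed(42)

# Hypothetical population of 1,000 people labeled by education level
population = [{"id": i, "education": random.choice(["HS", "BS", "MS"])}
              for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection
simple_sample = random.sample(population, k=100)

# Stratified random sampling: sample within each education level separately
strata = {}
for person in population:
    strata.setdefault(person["education"], []).append(person)

stratified_sample = []
for level, members in strata.items():
    # take the same fraction (10%) from each stratum
    stratified_sample.extend(random.sample(members, k=len(members) // 10))

print(len(simple_sample), len(stratified_sample))
```

Stratified sampling guarantees that each education level is represented in the sample in proportion to its size, which simple random sampling does not.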

**Assumption 3: Equal Variance**

A MANOVA assumes that the population covariance matrices of each group are equal.

The most common way to check this assumption is to use Box’s M test. This test is known to be highly sensitive, flagging even trivial differences between covariance matrices, so we usually use a significance level of .001 to determine whether or not the population covariance matrices are equal.

If the p-value for Box’s M test is greater than .001, we can assume that this assumption is met.

Fortunately, even if the p-value for the test is less than .001, a MANOVA tends to be robust against departures from this assumption.

For unequal covariance matrices to be a problem, the differences between them need to be quite extreme.
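Box's M is built into software such as SPSS, but as a sketch it can also be computed by hand. The version below uses the standard chi-square approximation to the test statistic (with its small-sample correction factor) on simulated data; it is an illustration, not a substitute for a vetted implementation.

```python
import numpy as np
from scipy import stats

def box_m(groups):
    """Box's M test for equality of covariance matrices.

    `groups` is a list of (n_i, p) arrays, one per factor level.
    Returns the chi-square statistic and its p-value.
    """
    k = len(groups)
    p = groups[0].shape[1]
    ns = np.array([g.shape[0] for g in groups])
    N = ns.sum()

    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - k)

    # Box's M statistic: pooled log-determinant vs. per-group log-determinants
    M = (N - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))

    # Small-sample correction factor for the chi-square approximation
    c = ((np.sum(1 / (ns - 1)) - 1 / (N - k))
         * (2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1)))
    chi2_stat = M * (1 - c)
    df = p * (p + 1) * (k - 1) / 2
    return chi2_stat, stats.chi2.sf(chi2_stat, df)

rng = np.random.default_rng(0)
same = [rng.multivariate_normal([0, 0], np.eye(2), size=100) for _ in range(2)]
diff = [rng.multivariate_normal([0, 0], np.eye(2), size=100),
        rng.multivariate_normal([0, 0], 25 * np.eye(2), size=100)]

print("equal covariances:   p = %.4f" % box_m(same)[1])
print("unequal covariances: p = %.4g" % box_m(diff)[1])
```

With identical population covariance matrices the p-value is typically well above .001, while the deliberately unequal pair is flagged decisively.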

**Assumption 4: No Multivariate Outliers**

A MANOVA assumes that there are no extreme multivariate outliers present in the data that could significantly influence the results.

The most common way to check this assumption is to calculate the Mahalanobis distance for each observation, which measures how far the observation lies from the centroid (multivariate mean) of the data, taking the covariance structure of the variables into account.

Under multivariate normality, the squared Mahalanobis distances approximately follow a chi-square distribution with degrees of freedom equal to the number of response variables. If the corresponding p-value for any observation is less than .001, we typically declare that observation to be an extreme outlier.

Refer to the following tutorials to see how to calculate Mahalanobis distance in various statistical software:

- How to Calculate Mahalanobis Distance in R
- How to Calculate Mahalanobis Distance in SPSS
- How to Calculate Mahalanobis Distance in Python
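As a minimal sketch of this procedure in Python (with simulated data and one deliberately planted extreme point), we compute each observation's squared Mahalanobis distance and flag those whose chi-square upper-tail p-value falls below .001:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical data: 50 ordinary observations plus one extreme point (row 50)
X = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), size=50),
               [[10.0, 10.0]]])

n, p = X.shape
centered = X - X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

# Squared Mahalanobis distance of each row from the centroid
d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

# Under multivariate normality, d2 is approximately chi-square(p),
# so flag observations whose upper-tail p-value is below .001
p_values = stats.chi2.sf(d2, df=p)
outliers = np.where(p_values < 0.001)[0]

print("flagged rows:", outliers)
```

The planted point at (10, 10) is flagged even though its presence inflates the estimated covariance matrix; for data with several outliers, robust estimates of the centroid and covariance are often preferred.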

**Additional Resources**

The following tutorials explain how to perform a MANOVA in various statistical software:

- How to Perform a MANOVA in R
- How to Perform a MANOVA in SPSS
- How to Perform a MANOVA in Stata