How to Perform a Two-Way ANOVA in SPSS


A two-way ANOVA is used to determine whether or not there is a statistically significant difference between the means of three or more independent groups that have been split on two factors.

The purpose of a two-way ANOVA is to determine how two factors impact a response variable, and to determine whether or not there is an interaction between the two factors on the response variable.

This tutorial explains how to conduct a two-way ANOVA in SPSS.

Example: Two-Way ANOVA in SPSS

A botanist wants to know whether or not plant growth is influenced by sunlight exposure and watering frequency. She plants 30 seeds and lets them grow for two months under different conditions for sunlight exposure and watering frequency. After two months, she records the height of each plant, in inches.

The results are shown below:

Use the following steps to perform a two-way ANOVA to determine if watering frequency and sunlight exposure have a significant effect on plant growth, and to determine if there is any interaction effect between watering frequency and sunlight exposure.

Step 1: Perform the two-way ANOVA.

Click the Analyze tab, then General Linear Model, then Univariate:

Drag the response variable height into the box labelled Dependent Variable. Drag the two factor variables water and sun into the box labelled Fixed Factor(s):

Next, click the Plots button. Drag water into the box labelled Horizontal axis and sun into the box labelled Separate lines. Then click Add. The words water*sun will appear in the box labelled Plots. Then click Continue.

Next, click the Post Hoc button. In the new window that pops up, drag the variable sun into the box labelled Post Hoc Tests for. Then check the box next to Tukey. Then click Continue.

Next, click the EM Means button. Drag the variables water, sun, and water*sun into the box labelled Display Means for. Then click Continue.

Estimated marginal means in SPSS

Lastly, click OK.
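Under the hood, the Univariate procedure partitions the total variability in height into pieces attributable to each factor, their interaction, and random error, then forms an F statistic for each effect. As a rough illustration of that arithmetic (not SPSS's actual implementation), here is a minimal pure-Python sketch on made-up data: a balanced design with two watering levels, three sunlight levels, and two plants per cell. All heights below are hypothetical.

```python
from itertools import product

# Hypothetical heights (inches) for a balanced 2 (water) x 3 (sun) design
# with 2 replicate plants per cell -- illustrative data only.
data = {
    ("daily", "low"):     [4.0, 4.4],
    ("daily", "medium"):  [5.0, 5.4],
    ("daily", "high"):    [6.0, 6.4],
    ("weekly", "low"):    [3.0, 3.4],
    ("weekly", "medium"): [4.0, 4.4],
    ("weekly", "high"):   [5.0, 5.4],
}
waters = ["daily", "weekly"]
suns = ["low", "medium", "high"]
n = 2  # replicates per cell

def mean(xs):
    return sum(xs) / len(xs)

all_vals = [v for cell in data.values() for v in cell]
grand = mean(all_vals)

# Marginal means for each factor level, plus the cell means
water_means = {w: mean([v for s in suns for v in data[(w, s)]]) for w in waters}
sun_means = {s: mean([v for w in waters for v in data[(w, s)]]) for s in suns}
cell_means = {k: mean(v) for k, v in data.items()}

# Sums of squares for a balanced two-way design
ss_water = n * len(suns) * sum((m - grand) ** 2 for m in water_means.values())
ss_sun = n * len(waters) * sum((m - grand) ** 2 for m in sun_means.values())
ss_inter = n * sum(
    (cell_means[(w, s)] - water_means[w] - sun_means[s] + grand) ** 2
    for w, s in product(waters, suns)
)
ss_error = sum(
    (v - cell_means[key]) ** 2 for key, vals in data.items() for v in vals
)

# F statistic for each effect = its mean square / error mean square
df_error = len(all_vals) - len(waters) * len(suns)
ms_error = ss_error / df_error
f_water = (ss_water / (len(waters) - 1)) / ms_error
f_sun = (ss_sun / (len(suns) - 1)) / ms_error

print(ss_water, ss_sun, ss_inter, ss_error)
print(f_water, f_sun)
```

SPSS then converts each F statistic to a p-value using the F distribution with the effect's degrees of freedom and the error degrees of freedom; that last step needs an F CDF, which the Python standard library does not provide.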

Step 2: Interpret the results.

Once you click OK, the results of the two-way ANOVA will appear. Here is how to interpret the results:

Tests of Between-Subjects Effects

The first table displays the p-values for the factors water and sun, along with the interaction effect water*sun:

We can see the following p-values for each of the factors in the table:

  • water: p-value = .000
  • sun: p-value = .000
  • water*sun: p-value = .201

Since the p-values for water and sun are both less than .05, both factors have a statistically significant effect on plant height.

And since the p-value for the interaction effect (.201) is not less than .05, this tells us that there is no significant interaction effect between sunlight exposure and watering frequency.

Estimated Marginal Means

The next set of tables displays the means of the observations for each factor and for each combination of factor levels:

For example:

  • The mean height of plants that were watered daily was 5.893 inches.
  • The mean height of plants that received high sunlight exposure was 6.62 inches.
  • The mean height of plants that were watered daily and received high sunlight exposure was 6.32 inches.

And so on.
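In a balanced design like this one, an estimated marginal mean is simply the average of the relevant cell means. A tiny sketch with hypothetical cell means (not the tutorial's output):

```python
# Hypothetical water x sun cell means (inches) -- illustrative only
cell_means = {
    ("daily", "low"): 5.0,  ("daily", "medium"): 5.9,  ("daily", "high"): 6.8,
    ("weekly", "low"): 4.1, ("weekly", "medium"): 5.0, ("weekly", "high"): 6.4,
}
waters = ["daily", "weekly"]
suns = ["low", "medium", "high"]

# EM mean for daily watering: average the daily cells over the sun levels
em_daily = sum(cell_means[("daily", s)] for s in suns) / len(suns)

# EM mean for high sunlight: average the high cells over the watering levels
em_high = sum(cell_means[(w, "high")] for w in waters) / len(waters)

print(em_daily, em_high)
```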

Post Hoc Tests

This table displays the p-values for the Tukey post-hoc comparisons between the three different levels of sunlight exposure.

Tukey post hoc tests for two-way ANOVA in SPSS

From the table we can see the p-values for the following comparisons:

  • high vs. low: p-value = .000
  • high vs. medium: p-value = .000
  • low vs. medium: p-value = .447

This tells us that there is a statistically significant difference between high and low sunlight exposure, along with high and medium sunlight exposure, but there is no significant difference between low and medium sunlight exposure.
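Conceptually, Tukey's procedure compares each pairwise difference in group means against a single "honestly significant difference" threshold, HSD = q * sqrt(MSE / n), where q is a critical value of the studentized range. The sketch below uses made-up group means and error mean square; only the group size (10 plants per sunlight level), the error df (24), and the table value q(.05, 3, 24) ≈ 3.53 match the tutorial's setup.

```python
import math

# Hypothetical sunlight group means (inches) and error mean square (MSE)
group_means = {"low": 4.9, "medium": 5.2, "high": 6.6}
mse = 0.5          # made-up error mean square from the ANOVA table
n_per_group = 10   # 30 plants split over 3 sunlight levels
q_crit = 3.53      # studentized range q(.05, k=3, df=24) from a standard table

# Any pair of means farther apart than this threshold differs significantly
hsd = q_crit * math.sqrt(mse / n_per_group)

for a, b in [("high", "low"), ("high", "medium"), ("low", "medium")]:
    diff = abs(group_means[a] - group_means[b])
    verdict = "significant" if diff > hsd else "not significant"
    print(f"{a} vs. {b}: |diff| = {diff:.2f}, HSD = {hsd:.2f} -> {verdict}")
```

With these made-up numbers the pattern matches the tutorial's table: high differs from both low and medium, while low vs. medium falls short of the threshold.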

Step 3: Report the results.

Lastly, we can report the results of the two-way ANOVA. Here is an example of how to do so:

A two-way ANOVA was performed to determine if watering frequency (daily vs. weekly) and sunlight exposure (low, medium, high) had a significant effect on plant growth. A total of 30 plants were used in the study.

 

A two-way ANOVA revealed that watering frequency (p < .001) and sunlight exposure (p < .001) both had a statistically significant effect on plant growth.

 

Plants that were watered daily experienced significantly higher growth than plants that were watered weekly.

 

Further, Tukey’s test for multiple comparisons found that plants that received high sunlight exposure had significantly higher growth than plants that received medium and low sunlight exposure. However, there was no significant difference between plants that received medium and low sunlight exposure.

 

There was also no statistically significant interaction effect between watering frequency and sunlight exposure.

3 Replies to “How to Perform a Two-Way ANOVA in SPSS”

  1. This piece has been most useful to me; it provides a step-by-step approach to statistical analysis using SPSS.

  2. Hello, I currently need to conduct a two-factor analysis of variance. The two factors are leg dominance (dominant leg vs. non-dominant leg) and sports level (U17 team, adult second team, adult first team, elite adult team), and I want to check whether these two factors have an impact on ball speed. However, I found that 2 of the eight groups (4×2) did not meet the normal distribution assumption. Which method should be used to determine whether there are differences between the groups? Thank you so much.

    1. Hi yecheng zhang! When you have data that does not meet the assumption of normality for some groups in a two-factor analysis of variance (ANOVA), you have a few options to address this issue. Here are some methods to consider:

      ### 1. Non-parametric Alternatives
      If the assumption of normality is violated, you can use non-parametric methods that do not assume normality. For a two-factor design, a suitable non-parametric alternative is the **Aligned Rank Transform (ART)**.

      #### Aligned Rank Transform (ART)
      ART allows you to perform non-parametric factorial ANOVA by aligning and ranking the data. This method is useful when you have interactions between factors.

      – **Steps to use ART:**
      1. Align the data: Adjust the data by subtracting the effects of other factors.
      2. Rank the aligned data.
      3. Perform ANOVA on the ranked data.

      Software implementations:
      – In R: use the `ARTool` package (there is no standard ART implementation in Python's mainstream statistics libraries, so R is the usual choice here).

      Example in R:
      ```R
      # Install ARTool package if not already installed
      install.packages("ARTool")

      # Load the ARTool package
      library(ARTool)

      # Assuming your data is in a dataframe called `data`
      # with columns: BallSpeed, LegDominance, SportsLevel

      # Fit the model using ART
      model <- art(BallSpeed ~ LegDominance * SportsLevel, data = data)

      # Perform ANOVA
      anova(model)
      ```

      ### 2. Transformations
      Another approach is to transform your data to meet the normality assumption. Common transformations include log, square root, or Box-Cox transformations. However, this approach may not always work and can complicate interpretation.

      ### 3. Robust ANOVA
      Robust ANOVA methods are designed to be less sensitive to violations of assumptions. These methods include:

      – **Welch’s ANOVA**: Used when homogeneity of variances is violated.
      – **Bootstrapping**: Resamples the data to create a distribution of the test statistic.

      #### Welch’s ANOVA
      Welch’s ANOVA can be used if variances are unequal. However, it is typically used for one-way ANOVA, so it may not be directly applicable to a two-factor design without modifications.

      #### Bootstrapping
      Bootstrapping involves resampling your data to perform hypothesis testing.

      Example in R:
      ```R
      # Assuming your data is in a dataframe called `data`
      # with columns: BallSpeed, LegDominance, SportsLevel

      # Load necessary library
      library(boot)

      # Define a function to compute the ANOVA statistic
      anova_stat <- function(data, indices) {
        d <- data[indices, ]  # Resample data
        model <- lm(BallSpeed ~ LegDominance * SportsLevel, data = d)
        anova(model)["LegDominance:SportsLevel", "F value"]
      }

      # Perform bootstrapping
      results <- boot(data = data, statistic = anova_stat, R = 1000)

      # Get the bootstrapped confidence intervals
      boot.ci(results, type = "bca")
      ```

      ### 4. Mixed-Effects Models
      If your data is not normally distributed, you can use mixed-effects models that are more flexible with respect to distributional assumptions. These models can handle both fixed and random effects and can be implemented in R using the `lme4` package.

      Example in R:
      ```R
      # Load necessary library
      library(lme4)

      # Fit the mixed-effects model
      model <- lmer(BallSpeed ~ LegDominance * SportsLevel + (1|ParticipantID), data = data)

      # Summary of the model
      summary(model)

      # Perform ANOVA
      anova(model)
      ```

      ### Summary
      Given that two of your groups do not meet the normality assumption, the Aligned Rank Transform (ART) method is recommended for its ability to handle non-normal data in factorial designs. If ART is not suitable or available, consider robust methods or transformations as alternatives. Mixed-effects models can also be considered for their flexibility and ability to handle complex data structures.

      By applying these methods, you can determine whether there are significant differences between the groups while accounting for the violation of normality. If you need further assistance with implementation or interpretation, feel free to ask!
