Introduction to Simple Linear Regression


What is Simple Linear Regression?

Simple linear regression is a statistical method you can use to understand the relationship between two variables, x and y.

One variable, x, is known as the predictor variable.

The other variable, y, is known as the response variable.

For example, suppose we have the following dataset with the weight and height of seven individuals:

[Table: weight (lbs) and height (inches) for the seven individuals]

Let weight be the predictor variable and let height be the response variable.

If we graph these two variables using a scatterplot, with weight on the x-axis and height on the y-axis, here’s what it would look like:

[Scatterplot: weight on the x-axis, height on the y-axis]

Suppose we’re interested in understanding the relationship between weight and height. From the scatterplot we can clearly see that as weight increases, height tends to increase as well. To actually quantify this relationship, though, we need to use linear regression.

Using linear regression, we can find the line that best “fits” our data. This line is known as the least squares regression line and it can be used to help us understand the relationship between weight and height. Usually you would use software like Microsoft Excel, SPSS, or a graphing calculator to actually find the equation for this line.

The formula for the line of best fit is written as:

ŷ = b0 + b1x

where ŷ is the predicted value of the response variable, b0 is the y-intercept, b1 is the regression coefficient (the slope of the line), and x is the value of the predictor variable.
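
Software arrives at b0 and b1 using the standard least squares formulas: b1 = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)², and b0 = ȳ − b1x̄. Here is a minimal Python sketch of that calculation; the weight and height values are hypothetical placeholders, since the original seven-person table isn’t reproduced in this post, so the fitted numbers you get will depend on your own data:

```python
# Closed-form least squares estimates for simple linear regression.
# NOTE: the weights and heights below are hypothetical placeholders,
# not the actual dataset used in this article.
weight = [140, 152, 160, 175, 190, 205, 212]   # x, predictor (lbs)
height = [60, 63, 66, 69, 71, 73, 75]          # y, response (inches)

n = len(weight)
x_bar = sum(weight) / n
y_bar = sum(height) / n

# b1 = sum((xi - x_bar) * (yi - y_bar)) / sum((xi - x_bar)^2)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(weight, height)) \
     / sum((x - x_bar) ** 2 for x in weight)

# b0 = y_bar - b1 * x_bar
b0 = y_bar - b1 * x_bar

print(f"y-hat = {b0:.4f} + {b1:.4f}x")
```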

Finding the “Line of Best Fit”

For this example, we can simply plug our data into the Statology Linear Regression Calculator and hit CALCULATE:

[Screenshot: Statology Linear Regression Calculator output]

The calculator automatically finds the least squares regression line:

ŷ = 32.7830 + 0.2001x

If we zoom out on our scatterplot from earlier and add this line to the chart, here’s what it would look like:

[Scatterplot: weight vs. height with the least squares regression line overlaid]

Notice how our data points are scattered closely around this line. That’s because this least squares regression line is the best-fitting line for our data out of all the possible lines we could draw.
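
If you’d rather fit the line in code than with an online calculator, a library routine such as scipy.stats.linregress does the same computation. A minimal sketch, again with placeholder values standing in for the seven weight/height pairs:

```python
# Sketch: fitting the least squares line with SciPy.
# The data below are hypothetical placeholders, not the article's dataset.
from scipy.stats import linregress

weight = [140, 152, 160, 175, 190, 205, 212]   # predictor (lbs)
height = [60, 63, 66, 69, 71, 73, 75]          # response (inches)

result = linregress(weight, height)
print(f"y-hat = {result.intercept:.4f} + {result.slope:.4f}x")
print(f"R-squared = {result.rvalue ** 2:.4f}")   # discussed later in this post
```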

How to Interpret a Least Squares Regression Line

Here is how to interpret this least squares regression line: ŷ = 32.7830 + 0.2001x

b0 = 32.7830. This means when the predictor variable weight is zero pounds, the predicted height is 32.7830 inches. Sometimes the value for b0 can be useful to know, but in this specific example it doesn’t actually make sense to interpret b0 since a person can’t weigh zero pounds.

b1 = 0.2001. This means that a one unit increase in x is associated with a 0.2001 unit increase in y. In this case, a one pound increase in weight is associated with a 0.2001 inch increase in height.

How to Use the Least Squares Regression Line

Using this least squares regression line, we can answer questions like:

For a person who weighs 170 pounds, how tall would we expect them to be?

To answer this, we can simply plug 170 into our regression line for x and solve for ŷ:

ŷ = 32.7830 + 0.2001(170) = 66.8 inches

For a person who weighs 150 pounds, how tall would we expect them to be?

To answer this, we can plug 150 into our regression line for x and solve for ŷ:

ŷ = 32.7830 + 0.2001(150) = 62.798 inches
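
In code, these predictions are just the regression equation evaluated at a given weight, using the intercept and slope found above (the function name here is only for illustration):

```python
# Predicted height (inches) from weight (lbs), using the fitted line
# y-hat = 32.7830 + 0.2001x from this example.
def predict_height(weight_lbs):
    return 32.7830 + 0.2001 * weight_lbs

print(predict_height(170))   # ~66.8 inches
print(predict_height(150))   # ~62.8 inches
```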

Caution: When using a regression equation to answer questions like these, make sure you only use values for the predictor variable that are within the range of the predictor variable in the original dataset we used to generate the least squares regression line. For example, the weights in our dataset ranged from 140 lbs to 212 lbs, so it only makes sense to answer questions about predicted height when the weight is between 140 lbs and 212 lbs.
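
One way to make that caution concrete is to refuse predictions outside the observed range of the predictor. This is just a sketch of one possible guard; the function name is illustrative, and the 140–212 lb bounds come from this dataset:

```python
# Sketch: only predict within the range of weights observed in the data.
def predict_height_safe(weight_lbs, x_min=140, x_max=212):
    if not (x_min <= weight_lbs <= x_max):
        raise ValueError(
            f"weight {weight_lbs} lbs is outside the observed range "
            f"[{x_min}, {x_max}]; prediction would be extrapolation"
        )
    return 32.7830 + 0.2001 * weight_lbs

print(predict_height_safe(170))    # fine: inside the observed range
# predict_height_safe(300)         # would raise ValueError (extrapolation)
```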

The Coefficient of Determination

One way to measure how well the least squares regression line “fits” the data is using the coefficient of determination, denoted R².

The coefficient of determination is the proportion of the variance in the response variable that can be explained by the predictor variable.

The coefficient of determination can range from 0 to 1. A value of 0 indicates that the response variable cannot be explained by the predictor variable at all. A value of 1 indicates that the response variable can be perfectly explained without error by the predictor variable.

An R² between 0 and 1 indicates just how well the response variable can be explained by the predictor variable. For example, an R² of 0.2 indicates that 20% of the variance in the response variable can be explained by the predictor variable; an R² of 0.77 indicates that 77% of the variance in the response variable can be explained by the predictor variable.

Notice in our output from earlier we got an R² of 0.9311, which indicates that 93.11% of the variability in height can be explained by the predictor variable of weight.

This tells us that weight is a very good predictor of height.
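
To connect R² back to its definition, it can be computed as 1 minus the ratio of the residual sum of squares to the total sum of squares. A minimal sketch, using the same placeholder data as before (the article’s actual data would give R² = 0.9311):

```python
# Sketch: R^2 = 1 - SS_res / SS_tot, matching its definition as the
# proportion of variance in y explained by the regression.
# Placeholder data, not the article's dataset.
from scipy.stats import linregress

weight = [140, 152, 160, 175, 190, 205, 212]
height = [60, 63, 66, 69, 71, 73, 75]

fit = linregress(weight, height)
y_hat = [fit.intercept + fit.slope * x for x in weight]
y_bar = sum(height) / len(height)

ss_res = sum((y - yh) ** 2 for y, yh in zip(height, y_hat))
ss_tot = sum((y - y_bar) ** 2 for y in height)

r_squared = 1 - ss_res / ss_tot
print(f"R-squared = {r_squared:.4f}")   # equals fit.rvalue ** 2
```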

Conditions for Linear Regression

We just saw how to conduct a simple linear regression analysis and how to interpret the resulting regression line. However, before conducting a linear regression analysis, we first need to make sure the following conditions are met:

  • The response variable has a linear relationship with the predictor variable. One simple way to check this is to make a scatterplot of the two variables, like we did at the beginning of this post, and confirm that the dots could fit reasonably well around a straight line.
  • For each value of X, Y has the same standard deviation.
  • For any given value of X, the Y values are independent and roughly normally distributed.

As long as these conditions are met, it’s safe to proceed with simple linear regression.
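
A quick visual way to check the second and third conditions is a residual plot: if the residuals scatter randomly around zero with roughly constant spread, the equal-variance and linearity conditions look reasonable. A minimal matplotlib sketch, again using placeholder data in place of the article’s dataset:

```python
# Sketch: residual plot to eyeball the linearity / equal-variance conditions.
# Placeholder data; substitute your own (x, y) values.
import matplotlib.pyplot as plt
from scipy.stats import linregress

weight = [140, 152, 160, 175, 190, 205, 212]
height = [60, 63, 66, 69, 71, 73, 75]

fit = linregress(weight, height)
residuals = [y - (fit.intercept + fit.slope * x)
             for x, y in zip(weight, height)]

plt.scatter(weight, residuals)
plt.axhline(0, linestyle="--")
plt.xlabel("Weight (lbs)")
plt.ylabel("Residual (observed - predicted height)")
plt.title("Residuals vs. predictor")
plt.show()
```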

Further Reading: 

How to Interpret Regression Coefficients
How to Read and Interpret a Regression Table
