A **z-score** tells you how many standard deviations an individual data value falls from the mean. It is calculated as:

**z-score = (x – μ) / σ**

where:

**x:** individual data value

**μ:** population mean

**σ:** population standard deviation
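The formula can be sketched directly in code. The following is a minimal example; the function name `z_score` is our own choice, not from the original article:

```python
def z_score(x, mu, sigma):
    """Return how many standard deviations x falls from the mean mu.

    x     : individual data value
    mu    : population mean
    sigma : population standard deviation
    """
    return (x - mu) / sigma

# A value of 84 from a population with mean 80 and standard deviation 4
# falls exactly one standard deviation above the mean.
print(z_score(84, 80, 4))  # → 1.0
```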

A z-score for an individual value can be interpreted as follows:

**Positive z-score:** The individual value is greater than the mean.

**Negative z-score:** The individual value is less than the mean.

**A z-score of 0:** The individual value is equal to the mean.

Z-scores are particularly useful when we want to compare the relative standing of two data points from two different distributions. To illustrate this, consider the following example.

**Example: Comparing Z-Scores**

The scores on a certain college exam are normally distributed with mean μ = 80 and standard deviation σ = 4. Duane scores an 84 on this exam.

The scores on another college exam are normally distributed with mean μ = 85 and standard deviation σ = 8. Debbie scores a 90 on this exam.

**Relative to their own exam score distributions, who scored higher on their exam?**

To answer this question, we can calculate the z-score of each person’s exam score:

Duane’s z-score = (x – μ) / σ = (84 – 80) / 4 = 4 / 4 = **1**

Debbie’s z-score = (x – μ) / σ = (90 – 85) / 8 = 5 / 8 = **0.625**

Although Debbie scored higher, Duane’s score is actually higher relative to the distribution of his particular exam.
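The comparison above can be reproduced with a few lines of code. This is a sketch of the worked example; the helper name `z_score` is our own, not from the article:

```python
def z_score(x, mu, sigma):
    """Standard deviations that x falls from the population mean."""
    return (x - mu) / sigma

# Duane: score of 84 on an exam with mean 80 and standard deviation 4
duane = z_score(84, mu=80, sigma=4)

# Debbie: score of 90 on an exam with mean 85 and standard deviation 8
debbie = z_score(90, mu=85, sigma=8)

print(duane, debbie)  # → 1.0 0.625

# The higher z-score indicates the higher score relative to its own distribution.
winner = "Duane" if duane > debbie else "Debbie"
print(winner)  # → Duane
```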

To understand this, it helps to visualize the situation. Here is Duane’s exam score relative to the distribution of his particular exam:

And here is Debbie’s exam score relative to the distribution of her exam:

Notice how Debbie’s score is closer to her population mean than Duane’s is to his. Although she has a higher overall score, her z-score is lower simply because the mean score on her particular exam is higher.

This example illustrates why z-scores are so useful for comparing data values from different distributions: because a z-score accounts for both the mean and the standard deviation of its distribution, it lets us compare values from different distributions and see which one is higher relative to its own distribution.