A **Brier Score** is a metric we use in statistics to measure the accuracy of probabilistic forecasts. It is typically used when the outcome of a forecast is binary – either the outcome occurs or it does not occur.

For example, suppose a weather forecast says there is a 90% chance of rain and it actually does rain. We can calculate the Brier Score for this forecast using the following formula:

**Brier Score** = (f – o)^{2}

where:

f = forecasted probability

o = outcome (1 if the event occurs, 0 if it doesn’t occur)

In this example, the Brier Score for our forecast would be (0.9 – 1)^{2} = (-0.1)^{2} = **0.01**
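This single-forecast calculation is easy to verify in code. Here is a minimal Python sketch (the function name `brier_score` is our own, not a library call):

```python
def brier_score(f, o):
    """Brier Score for one forecast.

    f: forecasted probability
    o: outcome (1 if the event occurred, 0 if it didn't)
    """
    return (f - o) ** 2

# 90% chance of rain, and it did rain
print(round(brier_score(0.9, 1), 4))  # 0.01
```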

A Brier Score for a set of forecasts is simply calculated as the average of the Brier Scores for the individual forecasts:

**Brier Score** = 1/n * Σ(f_{t} – o_{t})^{2}

where:

n = sample size (the number of forecasts)

Σ = a fancy symbol that means “sum”

f_{t} = forecasted probability at event *t*

o_{t} = outcome at event *t* (1 if the event occurs, 0 if it doesn’t occur)

A Brier Score can take on any value between 0 and 1, with 0 being the best score achievable and 1 being the worst score achievable. The lower the Brier Score, the more accurate the prediction(s).
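The averaged version can be sketched in Python as follows (the function name is our own). Note that a perfect set of forecasts scores 0 and a maximally wrong set scores 1, matching the range described above:

```python
def mean_brier_score(forecasts, outcomes):
    """Average Brier Score over a set of forecast/outcome pairs."""
    n = len(forecasts)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n

# Perfect forecasts score 0; maximally wrong forecasts score 1
print(mean_brier_score([1.0, 0.0], [1, 0]))  # 0.0
print(mean_brier_score([1.0, 0.0], [0, 1]))  # 1.0
```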

**Examples of Calculating Brier Scores**

The following examples illustrate how to calculate Brier Scores.

**Example 1: A forecast says there is a 0% chance of rain and it does rain.**

Brier Score = (0 – 1)^{2} = 1

**Example 2: A forecast says there is a 100% chance of rain and it does rain.**

Brier Score = (1 – 1)^{2} = 0

**Example 3: A forecast says there is a 27% chance of rain and it does rain.**

Brier Score = (.27 – 1)^{2} = 0.5329

**Example 4: A forecast says there is a 97% chance of rain and it does not rain.**

Brier Score = (.97 – 0)^{2} = 0.9409

**Example 5: A weather forecaster makes the following predictions:**

Chance of Rain | Outcome
---|---
27% | Rain
67% | Rain
83% | No Rain
90% | Rain

We can calculate the Brier Score for this set of predictions using the following formulas:

Chance of Rain | Outcome | Brier Score
---|---|---
27% | Rain | (.27 – 1)^{2} = .5329
67% | Rain | (.67 – 1)^{2} = .1089
83% | No Rain | (.83 – 0)^{2} = .6889
90% | Rain | (.90 – 1)^{2} = .01

Brier Score = (.5329 + .1089 + .6889 + .01) / 4 = **0.3352**.
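The full calculation in Example 5 can be verified with a short, self-contained Python sketch (the function name is our own):

```python
def mean_brier_score(forecasts, outcomes):
    """Average of (f - o)^2 over all forecast/outcome pairs."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.27, 0.67, 0.83, 0.90]
outcomes = [1, 1, 0, 1]  # 1 = rain, 0 = no rain
print(round(mean_brier_score(forecasts, outcomes), 4))  # 0.3352
```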

**Brier Skill Scores**

A **Brier Skill Score** is a metric that tells us how well the Brier Score of a new forecasting model compares to an existing forecasting model. It is calculated as:

**Brier Skill Score** = (BS_{E} – BS_{N}) / BS_{E}

where:

BS_{E} = Brier Score of existing model

BS_{N} = Brier Score of new model

If a Brier Skill Score is positive, then the new model makes more accurate predictions. If the Brier Skill Score is negative, then the new model makes worse predictions. And if the Brier Skill Score is equal to zero, then the new model offers no improvement over the existing model.

For example, suppose our existing model has a Brier Score of BS_{E} = 0.4421 and our new model has a Brier Score of BS_{N} = 0.3352. The Brier Skill Score of our new model can be calculated as:

**Brier Skill Score** = (0.4421 – 0.3352) / (0.4421) = **0.2418**.

Since this number is positive, it’s an indication that our new model provides more accurate forecasts relative to the existing model.
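This comparison is also simple to compute in code; here is a minimal Python sketch (the function name is our own):

```python
def brier_skill_score(bs_existing, bs_new):
    """Relative improvement of a new model's Brier Score
    over an existing model's Brier Score."""
    return (bs_existing - bs_new) / bs_existing

print(round(brier_skill_score(0.4421, 0.3352), 4))  # 0.2418
```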

The higher the Brier Skill Score, the bigger the improvement is in the new model compared to the existing model.