All forecasts suffer from some degree of uncertainty, and through
ensemble prediction and statistical methods we are increasingly
developing the capability to estimate this uncertainty objectively. For
some forecast users it is sufficient to simply take the standard
forecast as the best possible estimate of what will happen, but many
users could potentially benefit more by understanding the uncertainty
and assessing the risks. This is perhaps most easily seen when there is
a small but real risk of severe weather. One of the best ways to
express uncertainty in a consistent and verifiable way is as
Probability Forecasts. A probability forecast specifies how likely a
defined event is to occur, as a percentage, and can help users to
assess the risks associated with particular weather events to which
they are sensitive.
The most important issue with a probability forecast is that both the
forecaster and the user must understand exactly what the probabilities
mean. Probabilities must be issued for a clearly defined event which
either occurs or does not occur. For example, a statement that there is
"a 30% probability of rain in Scotland" is meaningless because it is
not clear whether it is for a specific place or just somewhere in
Scotland, there is no time given and it is not stated how much
rain. Examples of well-defined probability forecasts could be:
- 30% probability of more than 5mm of rain at Edinburgh
Airport between 1200 and 1800.
- 70% probability of wind reaching gale force in at
least one place in Scotland on Tuesday.
- 10% probability of wind sufficient to cause severe
structural damage in London overnight.

It is generally easier to define events and verify them unambiguously
for specific locations, but as the second example shows it is also
possible to define probabilities covering regions. The third example
illustrates how even quite a low probability can give a useful warning
of a serious event likely to lead to significant disruption. Even though
there is a 90% probability that the event will not occur, knowledge of
the 10% risk enables users to be prepared for the worst rather than
being caught out.
| Probabilities from Ensembles
Ensembles are designed to estimate probabilities by sampling the range
of possible forecast outcomes. We estimate the probability of a
particular event by counting the proportion of ensemble members which
forecast that event to occur. Taking the first forecast example above,
30% would result when 15 out of 50 ensemble members predict more than
5mm of rain to fall at the specified location in the defined period.
In practice this method does not always give reliable probabilities,
especially when we look at detailed local weather. For this reason the
Met Office calibrates probability forecasts from the ensemble to further
improve the quality of the information provided.
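The counting step described above can be sketched as follows. This is a minimal illustration, not Met Office code, and the member rainfall values are invented so that 15 of 50 members exceed the threshold, matching the first example.

```python
# Hypothetical sketch: estimate an event probability from an ensemble by
# counting the members that forecast the event. Rainfall values (mm) are
# invented for illustration.

def event_probability(members, threshold):
    """Fraction of ensemble members forecasting more than `threshold`."""
    hits = sum(1 for value in members if value > threshold)
    return hits / len(members)

# 50 invented member forecasts of rainfall (mm) at one location;
# 15 of them exceed 5 mm, giving a probability of 30%.
rainfall = [6.0] * 15 + [2.0] * 35
print(event_probability(rainfall, 5.0))  # 0.3
```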
| Probabilities and Odds
Use of probabilities can sometimes cause confusion, and many people are
more familiar with Odds, which are commonly used for betting. The two
are very closely related. For example, a probability of 10% means 10
times out of 100, or a 1 in 10 chance. Thus for every 10 occasions the
event will not occur on 9 occasions and will occur only once. The Odds
are therefore 9:1 against.
Working in the opposite direction, if the Odds are 4:1 against an event
occurring, then this means that it will not happen 4 times as often as
it happens. So it will occur on 1 occasion in 5. Turning 1 in 5 into a
percentage gives 20%.
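The arithmetic above can be written out explicitly. This is a minimal sketch using exact fractions; the function names are invented for illustration.

```python
from fractions import Fraction

def odds_against(probability):
    """Odds against the event, e.g. probability 1/10 -> 9 (i.e. 9:1 against)."""
    p = Fraction(probability)
    return (1 - p) / p

def probability_from_odds_against(n):
    """Odds of n:1 against -> probability, e.g. 4:1 against -> 1/5 = 20%."""
    return Fraction(1, n + 1)

print(odds_against(Fraction(1, 10)))     # 9, i.e. 9:1 against
print(probability_from_odds_against(4))  # 1/5, i.e. 20%
```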
| Can a probability forecast be wrong?
It is important to remember that the reason for issuing probability
forecasts is that it is often impossible to give a categorical yes/no
forecast with complete accuracy. A probability forecast instead
describes how likely an event is on a particular occasion. Thus it is
reasonable to ask whether a probability forecast can be wrong. For
example, if a probability is given as 10% and the event occurs, then is
this right, or wrong? One might think that it is wrong because the
probability was low but the event did occur, but this is the wrong
interpretation. Of all the times that a 10% probability is issued, the
event should happen 1 time in 10. Thus we can never say whether a
single probability forecast is right or wrong. We can only measure how
good our probability forecasts are by looking at a large set of
forecasts. Then we can group all the 10% forecasts together and check
that the event occurred on 1 in 10 of these occasions; similarly for
the 70% forecasts, it should occur on 7 in 10, and so on. Results from
verifying a large number of forecasts can be plotted in a Reliability
Diagram: for a perfect set of probability forecasts the plotted line
will lie along the diagonal.
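The grouping step behind a reliability check can be sketched as below. The verification data are invented so that each issued probability is perfectly reliable; real forecast sets will scatter around the diagonal.

```python
from collections import defaultdict

def observed_frequencies(forecasts):
    """Group (issued probability, event occurred) pairs by the issued
    probability and return each group's observed event frequency."""
    groups = defaultdict(list)
    for prob, occurred in forecasts:
        groups[prob].append(occurred)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in groups.items()}

# Invented verification data: ten 10% forecasts (event occurred once)
# and ten 70% forecasts (event occurred seven times).
data = ([(0.1, e) for e in [1] + [0] * 9]
        + [(0.7, e) for e in [1] * 7 + [0] * 3])
print(observed_frequencies(data))  # {0.1: 0.1, 0.7: 0.7}
```

Plotting issued probability against observed frequency for each group gives the Reliability Diagram described above.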
There is one rather trivial exception to the general rule that
probability forecasts cannot be wrong. The only time a single
forecast can be wrong is if the issued probability is either 0% or 100%,
which is equivalent to going back to a categorical forecast, and getting
it wrong! Thus in the graph below, which shows an example of
probabilities of different temperatures at Heathrow from February 2004,
the forecast would be "wrong" if the actual temperature was above
13 Celsius or below -4 Celsius, but for any other temperature it is "right".
| Probabilities and false alarms
As noted above, if the probability is 10% then the event will only
occur on 1 occasion in every 10 (or equivalently 10 in 100). This means
that on the other 9 out of 10 occasions the event will not occur. Thus
if a user asks the Met Office to warn them every time there is a 10%
risk of a particular event, then they should expect that 9 times out of
10 that a warning is issued the event will not occur. If the user does
not understand this then they are likely to think the Met Office is
issuing too many False Alarms,
or to quote the fairy tale, "crying wolf". On the other hand, if the
user is liable to suffer a large loss by being unprepared for the
event, then they may well benefit from putting up with 9 out of 10
false alarms because of the large benefit from being prepared on the 1
in 10 occasion when the event does occur.
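The trade-off described above can be made concrete with a worked example. The costs here are invented for illustration: suppose protecting costs 50 each time a warning is issued, while being unprepared when the event occurs costs 1000.

```python
# Invented costs for illustration of the false-alarm trade-off: over the
# 10 occasions on which a 10% warning is issued, the event occurs once.

PROTECT_COST = 50       # cost of preparing each time a warning is issued
UNPREPARED_LOSS = 1000  # loss if the event occurs and no action was taken

cost_if_always_protect = 10 * PROTECT_COST   # 9 false alarms + 1 occurrence
cost_if_never_protect = 1 * UNPREPARED_LOSS  # the single occurrence, unprepared
print(cost_if_always_protect, cost_if_never_protect)  # 500 1000
```

Despite nine false alarms, acting on every warning halves this user's expected cost.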
| Decision-making with probability forecasts
To make best use of the probability forecasts, the user must choose a
probability threshold which gives the correct balance of alerts and
false alarms for their particular application. Consider two examples:
- User A is liable to suffer a loss when a particular
weather event occurs, so they would like to be able to protect themselves.
However, protecting themselves is also expensive (though less
expensive than being unprotected when an event occurs), so they should
only protect themselves when the probability of the event is high.
- User B is sensitive to the same weather event and is
liable to suffer a much larger loss than User A, but with a warning can
protect themselves quite cheaply. This user should therefore protect
themselves at much lower probabilities. They will get a larger number of
false alarms but have the best chance of being protected when an event
occurs.

Both these users will take the same probability forecasts from the Met
Office, but they will respond to them in different ways. User B will
react at low probabilities, perhaps anything more than 20%, whereas User
A may only take action when the probability reaches 80%. The precise
level at which each user should start to react depends on their cost of
protection and their potential losses; advice can be offered on how to
maximise the benefit of the forecasts for any particular application.
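The threshold reasoning above can be sketched as a simple rule: protect whenever the expected loss (probability times potential loss) exceeds the cost of protecting. The function name and the costs are invented for illustration.

```python
# Hypothetical sketch of the decision rule: act when the expected loss
# from staying unprotected exceeds the cost of protection.

def should_protect(probability, cost, loss):
    """Protect when expected loss (probability * loss) exceeds protection cost."""
    return probability * loss > cost

# User A: protection is expensive relative to the loss -> high threshold.
print(should_protect(0.50, cost=80, loss=100))  # False (acts only above 80%)
print(should_protect(0.85, cost=80, loss=100))  # True
# User B: cheap protection, large loss -> reacts at low probabilities.
print(should_protect(0.30, cost=20, loss=100))  # True (acts above 20%)
```

With these invented numbers, User A's threshold works out at 80/100 = 80% and User B's at 20/100 = 20%, matching the levels discussed above.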