
Uncertainty in Weather Forecasts


Can we offer more information? 

Everyone knows that weather forecasts go wrong sometimes! Reasons
for this vary, but the chaotic nature of the atmosphere means
there is always a limit to what we can predict accurately.
When we issue a forecast we usually give our best estimate of
what will happen, but often we know more about the confidence
or uncertainty surrounding that forecast. In television forecasts
the presenter will often express some of the uncertainty in words,
but time constraints limit how much detail can be given. In
many other forecasts, such as those available freely through the
internet, only the best estimate is provided. So what more can
we offer? 

Uncertainty can be
expressed in several ways:
1. Verbal Expressions
Weather forecasters express uncertainty using many forms of
words. Examples for rain could include:
- rain at times
- scattered showers, mainly in the NW
- up to 50 mm in places
- risk of heavy bursts
End-users are often most interested in specific events which
might disrupt their normal activities, and then the forecaster
might express a risk with phrases like:
- only a small chance of...
- a high probability of disruption to...
- a risk that some places might get...
- I cannot guarantee you won't get..., but it is probably
worth the risk.
In order to make a statement like the last example, the forecaster
requires some knowledge of the user's application, and in particular
the risks and dangers for the user of the event happening. The
forecast given to one user may therefore not be appropriate for
another. A second user interested in the same weather event for
a different application might get:
- I cannot be sure you'll get..., but I wouldn't risk it if
I were you!
All the examples above give some impression of how likely the
end-user is to experience certain types of weather, but none of
them is very precise. Because they are imprecise, it is difficult
to say whether such forecasts are accurate or not. The following
examples show how uncertainty can be expressed more precisely.
Attaching numbers to the confidence or uncertainty can allow the
user to assess the risks more accurately, and it also allows the
forecasts to be assessed more reliably.
2. Confidence Range
A range of values can be given, such as "Temperature between
3 and 7 Celsius". For some customers we thus provide a
most-probable temperature, along with upper and lower bounds.
These bounds would be given with a level of confidence agreed
with the end-user, for example "We are 95% certain that the
temperature will be between 3 and 7 Celsius" for which the user
should expect that the actual temperature will fall outside
the range given on around 5% of occasions, or once in every
20 forecasts.
The graph below gives an example of how confidence ranges can
be presented to users. Maximum and minimum temperatures for
each day at a given location are given a range of uncertainty.
The full length of each vertical line represents the 95% confidence
range, while the central bar represents a 50% confidence range.
The horizontal line across this bar is the most likely temperature. Thus
for the first night we can be 95% certain the minimum temperature
will be between 8 and 13 Celsius, and 50% certain it will be
between about 11 and 12 Celsius. It is interesting to note how
the uncertainty increases further ahead in the forecast.
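As an illustration of the arithmetic behind such a chart, a confidence
range can be read off a sample of forecast values by taking percentiles.
The sketch below is illustrative Python, not Met Office code, and the
choice of percentiles is simply the standard one for 95% and 50% ranges.

```python
# Illustrative sketch: derive the confidence ranges described above
# from a sample of forecast temperatures. The 95% range runs from the
# 2.5th to the 97.5th percentile; the 50% range from the 25th to the
# 75th percentile.

def percentile(values, p):
    """p-th percentile (0-100) using linear interpolation."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def confidence_ranges(temps):
    """Return the 95% and 50% ranges and the most likely value."""
    return {
        "95%": (percentile(temps, 2.5), percentile(temps, 97.5)),
        "50%": (percentile(temps, 25.0), percentile(temps, 75.0)),
        "most likely": percentile(temps, 50.0),
    }
```

The user then reads the result exactly as in the chart: on around 5%
of occasions the outcome is expected to fall outside the wider range.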
3. Probability Forecasts
Where the end-user of a forecast is interested in the risk
of a particular event occurring, this can be expressed as a
probability. The second list of examples above could then become
something like:
- a 10% chance of...
- 80% probability of disruption to...
- a one-in-five chance that Heathrow Airport might get...
- a 5% risk of..., so it is probably worth the risk.
Note that in the third example we have not only attached a number
but also been specific about the risk at a particular location,
which makes interpretation for the user much easier. In
the last example the risk can be justified by the number, depending
of course on the user's sensitivity. 
How we estimate uncertainty 

The Met Office uses various techniques to estimate the uncertainty
in forecasts. In particular, the development of so-called "ensemble
forecasts" allows us to estimate many uncertainties automatically
and provide extra information to customers in routine forecast
products. In an ensemble forecast, instead of running our
computer forecast model just once, we run it many times from slightly
different starting conditions to assess how certain or uncertain
the forecast is. We can estimate the risk or probability of a
given weather event from the proportion of these forecasts which
predict the event to occur.
Ensemble forecasts available at the current time are best suited
to estimating uncertainty in forecasts between 3 and 10 days ahead.
For customers with shorter-range requirements (1-2 days ahead) we
are currently researching new ensemble methods, but we can also
offer statistically-based estimates of uncertainty. Whatever the
method, the types of products we can offer are similar.
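The counting step described above can be written down in a few lines.
This is an illustrative sketch, not Met Office code, and the member
values are invented.

```python
# Sketch of estimating an event probability from an ensemble: the
# probability is the fraction of ensemble members predicting the event.

def event_probability(members, event):
    """Fraction of ensemble members for which `event` is true."""
    return sum(1 for m in members if event(m)) / len(members)

# Invented example: 50 rainfall forecasts in mm; the event of
# interest is "more than 5 mm of rain".
rain_mm = [0.0] * 35 + [6.0] * 15
prob = event_probability(rain_mm, lambda r: r > 5.0)  # 15/50 = 0.3
```

With 15 of the 50 members predicting the event, the forecast would be
issued as a 30% chance of more than 5 mm of rain.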
Probabilities and Odds

Use of probabilities can sometimes cause confusion, and many
people are more familiar with odds, which are commonly used for
betting. The two are very closely related. For example, a probability
of 10% means 10 times out of 100, or a 1 in 10 chance. Thus for
every 10 occasions the event will not
occur on 9 occasions and will only occur once. The Odds are therefore
9:1 against.
Working in the opposite direction, if the Odds are 4:1 against
an event occurring, then this means that it will not happen 4
times as often as it happens. So it will occur on 1 occasion in
5. Turning 1 in 5 into a percentage gives 20%.
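The two conversions just described can be sketched directly. This is
generic arithmetic rather than anything specific to forecasting; it
assumes the probability is a simple fraction such as 10% or 20%.

```python
from fractions import Fraction

# Sketch of the probability/odds arithmetic described above.

def odds_against(p):
    """Probability -> odds against, e.g. 0.10 -> (9, 1), i.e. 9:1."""
    f = Fraction(p).limit_denominator(1000)
    # The event occurs `numerator` times in `denominator` occasions,
    # so it fails the remaining times.
    return (f.denominator - f.numerator, f.numerator)

def probability_from_odds(against, in_favour):
    """Odds against -> probability, e.g. 4:1 against -> 1 in 5 = 0.20."""
    return in_favour / (against + in_favour)
```

So a 10% probability gives odds of 9:1 against, and odds of 4:1
against give a probability of 20%, matching the worked examples above.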
Probabilities and False Alarms 

As noted above, if the probability is 10% then the event will
only occur on 1 occasion in every 10 (or equivalently 10 in 100).
This means that on the other 9 out of 10 occasions the event will
not occur. Thus if a user asks the Met Office to warn them every
time there is a 10% risk of a particular event, they should
expect that on nine occasions out of every ten when a warning is
issued the event will not occur. If the user does not understand
this, they are likely to think the Met Office is issuing too many
False Alarms, or, to quote the fairy tale, "crying wolf". On the
other hand, if the user is liable to suffer a large loss by being
unprepared for the event, then they may well benefit from putting
up with 9 out of 10 false alarms because of the large benefit
from being prepared on the 1 in 10 occasion when the event does
occur.
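The trade-off in the last sentence can be made concrete with some
invented figures: suppose acting on each warning costs something, but
being caught unprepared costs far more.

```python
# Invented figures illustrating why 9 false alarms in 10 can be
# worth tolerating.
prepare_cost = 100      # assumed cost of acting on each warning
unprepared_loss = 5000  # assumed loss if the event strikes unprepared
p_event = 0.10          # the 10% risk discussed above

# Expected cost over 10 warnings:
cost_if_always_act = 10 * prepare_cost               # 10 preparations
cost_if_never_act = 10 * p_event * unprepared_loss   # 1 expected event
```

With these figures, always acting costs 1000 against an expected 5000
for never acting, so the nine false alarms are a price worth paying.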
Decision-making with Probability Forecasts 
To make best use of the probability forecasts, the user must choose
a probability threshold which gives the correct balance of alerts
and false alarms for their particular application. Consider two
examples:
- User A is liable to suffer a loss when a particular weather
event occurs, so they would like to be able to protect themselves.
However, protection is itself expensive (though less expensive
than being unprotected when an event occurs), so they should
only protect themselves when the probability of the event is high.
- User B is sensitive to the same weather event and is liable
to suffer a much larger loss than User A, but with a warning
can protect themselves quite cheaply. This user should therefore
protect themselves at much lower probabilities. They will
get a larger number of false alarms but have the best chance
of being protected when an event does occur.
Both these users will take the same probability forecasts from
the Met Office, but they will respond to them in different ways.
User B will react at low probabilities, perhaps anything more
than 20%, whereas User A may only take action when the probability
reaches 80%. The precise level at which each user should start
to react depends on their cost of protection and their potential
losses; advice can be offered on how to maximise the benefit
of the forecasts for any particular application.
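The thresholds sketched for Users A and B follow the standard
cost/loss decision rule: act when the probability exceeds the ratio of
the cost of protecting to the loss avoided. The figures below are
invented to match the two users in the text.

```python
# Standard cost/loss decision rule: protect when p > C / L, where C
# is the cost of protecting and L is the loss if caught unprotected.

def should_protect(probability, cost, loss):
    """Act when the expected loss p * L exceeds the protection cost C."""
    return probability > cost / loss

# Invented figures: User A has C/L = 800/1000 = 0.8,
# User B has C/L = 200/1000 = 0.2.
user_a_acts = should_protect(0.5, 800, 1000)  # False: 0.5 < 0.8
user_b_acts = should_protect(0.5, 200, 1000)  # True:  0.5 > 0.2
```

Given the same 50% forecast, User B protects while User A does not,
exactly as described above.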


Many users of forecasts can benefit from understanding
better what the uncertainty in a forecast is. Here
are two examples:
- In the energy business a 1-degree difference in
temperature can have a huge impact on demand, and
hence on the cost of gas or electricity. For a trader,
knowing in advance the risk of the temperature
being one or two degrees warmer or colder than the
basic forecast value can give a valuable competitive
edge in the trading market. Some Met Office customers
in the energy sector benefit from getting upper and
lower bounds on their temperature forecasts, from
which they can estimate bounds on energy demand and
be better prepared for surges.
- For an engineer planning a delicate operation in
the offshore oil industry involving expensive equipment,
knowing the risks of larger-than-expected waves can
avoid expensive last-minute cancellations, or minimise
the risk of equipment damage due to deteriorating
conditions during the operation. Customers can be provided
either with a range of uncertainty or with probabilities
of waves exceeding given heights.


For more information on any of the above products, please contact
the Met Office Customer Centre.


