There are some things that you know to be true, and others that you know to be false; yet, despite this extensive knowledge that you have, there remain many things whose truth or falsity is not known to you. We say that you are uncertain about them. You are uncertain, to varying degrees, about everything in the future; much of the past is hidden from you; and there is a lot of the present about which you do not have full information. Uncertainty is everywhere, and you cannot escape from it.
Dennis Lindley, Understanding Uncertainty
Quantitative risk assessments are commonly used to determine time and cost contingencies for engineering and construction projects. Eager software jockeys often do these “black-box” assessments using one of the many available software packages that provide a large variety of probability distributions. But project teams often don’t grasp the underlying principles that drive successful quantitative risk assessments when selecting distributions or assigning ranges. One of the fundamental principles of risk quantification is differentiating between uncertainty and variation.
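To make the idea of a quantitative contingency assessment concrete, here is a minimal Monte Carlo sketch. The line items, their three-point ranges, and the P80 confidence level are all hypothetical, chosen only to illustrate the mechanics; real assessments would use project-specific data and correlations.

```python
import random

random.seed(42)

# Hypothetical cost line items: (low, most likely, high) in $k.
# These names and figures are illustrative, not from the article.
line_items = {
    "earthworks": (400, 500, 700),
    "structural": (800, 950, 1300),
    "mechanical": (600, 700, 1000),
}

N = 100_000
totals = []
for _ in range(N):
    # Sample each line item from a triangular distribution and sum.
    total = sum(random.triangular(lo, hi, mode)
                for lo, mode, hi in line_items.values())
    totals.append(total)

totals.sort()
base = sum(mode for _, mode, _ in line_items.values())  # deterministic estimate
p80 = totals[int(0.80 * N)]                             # 80th percentile outcome
contingency = p80 - base
print(f"base = {base}, P80 = {p80:.0f}, contingency = {contingency:.0f}")
```

Because the high ends of the ranges sit further from the most likely values than the low ends, the simulated total is skewed upward and the P80 outcome exceeds the deterministic base estimate; the difference is the contingency.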
Imagine developing the perfect project plan. All the project team members and contractors agree with it and it gets baselined. When the project is implemented, there are no changes; every task is completed exactly as estimated, the durations and effort are exactly as planned, the quantities and rates don’t change, there are no over- or underruns, and there are no scope changes. The project is completed exactly on time and to the budget, and it achieves all its business objectives.
But we know this never happens. Project plans are never perfect, and things change during project implementation. The difference between the perfect project and reality is variation and our uncertainty about the variation.
Predicting future events is an essential part of human existence, and we are constantly faced with situations that require us to estimate uncertain outcomes. These estimates range from simple scenarios, such as the time it will take to get to work, to complex predictions of how long it will take to complete a project with thousands of tasks. A consistent observation about our ability to estimate future events is that we often get it wrong, and there are countless examples of failed human endeavors to support this. A second, equally prevalent, observation is that we do not learn from our past estimation mistakes, and often repeat them.
To counter our inability to estimate accurately, we add contingencies to our estimates. These contingencies are often based on expert judgment or “gut feel”, or they may be the result of carefully crafted statistical models.
Uncertainty has been studied extensively in many fields, including economics, statistics, metrology, insurance, philosophy and physics. The definition of uncertainty often depends on the perspective and needs of the user. To some, uncertainty is closely associated with risk, while others may see it simply as the choice between preferences.
In projects, our interest in uncertainty is specific to the estimation of cost and time. But project teams frequently suffer from the planning fallacy, which leads them to underestimate the quantity of work and the time required to perform activities, or the cost of performing them. Kahneman and Lovallo describe two views that estimators take. The inside view considers the specific estimating problem on its own merits: the time, effort and skills required. The outside view ignores the factors that could affect the specific problem and instead focuses on similar historical cases in the estimating process. In estimation, we have an intuitive preference for our own inside views, and we tend to treat each estimate as a unique situation that is different from previous projects. This approach leads to inconsistencies in our estimation attempts.
A person who travels to work every day has little control over the time it takes to complete the journey. There are numerous factors that affect this journey, such as traffic lights, slow traffic, queues at intersections, roadworks and the weather. The time of the journey will vary from day to day, and the traveler has limited influence over this variation. This is the natural variation of the event, and it has some underlying probability distribution. For instance, the probability that the journey will take 5 minutes may be very low, but the probability that it will take 90 minutes is also very low. If we record the actual journey times over many days, we may find that the journey has a distinct probability distribution, like the one shown below.
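The process of recording journey times and reading off a distribution can be sketched in a few lines. The lognormal shape and its parameters below are assumed purely for illustration; any unimodal, right-skewed distribution would make the same point that extreme times are rare.

```python
import random
import statistics

random.seed(7)

# Simulate 250 recorded journey times in minutes.
# The lognormal shape is an assumption for illustration only.
times = [random.lognormvariate(3.3, 0.18) for _ in range(250)]

mean = statistics.mean(times)
sd = statistics.stdev(times)
times.sort()
p05 = times[int(0.05 * len(times))]   # 5th percentile of observed times
p95 = times[int(0.95 * len(times))]   # 95th percentile of observed times
print(f"mean ~ {mean:.1f} min, sd ~ {sd:.1f}, "
      f"5th-95th percentile: {p05:.1f}-{p95:.1f} min")
```

With these illustrative parameters, the bulk of the journeys cluster around the high twenties of minutes, while journeys as short as 5 minutes or as long as 90 minutes essentially never occur, mirroring the low tail probabilities described above.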
However, when a person is asked to estimate how long it will take to travel to work on an average day, she will not have the luxury of a carefully developed probability density function or graph and will assess her travel time under uncertainty. This means that she will construct an assessment of the variation in her head, based on her past experiences, and decide what the most likely travel time is. Her assessed travel time could be more, less or the same as the measured most probable time.
The table and graphs below show the results of an actual experiment in which five participants each gave an initial three-point estimate of their travel time to work. They then recorded their actual travel times over 25 days. The estimated journey times are shown as PERT distributions and the actual times as normal distributions.
These graphs illustrate the difference between uncertainty and variation. Four of the five participants were more pessimistic about their estimated journey times, and the ranges (the difference between the highest and lowest durations) of the actual journey times are smaller than the estimated ranges. The estimated journey duration is an estimate under uncertainty; the actual times are the real variation in travel time over 25 days. This experiment illustrates the dilemma with any estimate: the uncertainty of knowing what the true distribution of the variation will be or, in the case of a single value, what the actual value will be when the future event occurs.
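The PERT distribution used for the estimates above is conventionally built from a three-point (optimistic, most likely, pessimistic) estimate as a scaled Beta distribution. A minimal sketch follows; the three journey-time values are hypothetical, not the experiment's actual data.

```python
import random
import statistics

random.seed(1)

# Hypothetical three-point estimate of a journey, in minutes:
# optimistic (a), most likely (m), pessimistic (b).
a, m, b = 20.0, 30.0, 55.0

# Standard Beta-PERT shape parameters derived from the three points.
alpha = 1 + 4 * (m - a) / (b - a)
beta = 1 + 4 * (b - m) / (b - a)

# The PERT mean has the familiar closed form (a + 4m + b) / 6.
pert_mean = (a + 4 * m + b) / 6

# Sample the scaled Beta to see the implied uncertainty range.
samples = [a + (b - a) * random.betavariate(alpha, beta)
           for _ in range(50_000)]
print(f"PERT mean = {pert_mean:.2f}, "
      f"sample mean ~ {statistics.mean(samples):.2f}")
```

Note how the skewed three-point estimate pulls the mean (32.5 minutes here) above the most likely value of 30; a distribution like this encodes the estimator's uncertainty, which can then be compared against the measured variation once actual times are recorded.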
Uncertainty is present whenever a human observes some real event that exhibits variation. The variation may follow a predictable pattern or may be completely random. A human observer of the event may notice that the event has variation and may attempt to predict the pattern or quantity of that variation. The observer will make assumptions about the behavior of the variation pattern. These assumptions may lead the observer to believe that there is less, more, or the same variation as there actually is. We say that the human assessment of the variation is uncertain.
Excessive over- or underestimation of variation always leads to problems since the observer’s uncertainty pattern does not match the variation of real-world events, and business decisions are based on the observer’s assessment of the variation and not the actual variation. But one cannot blame the observer for not knowing the actual distribution since it will be known only after the event has occurred. It is for this reason that quantitative risk assessments are important. They are the best tools we have available to predict variation and to highlight the extent of our uncertainty, provided that we understand how to use these tools correctly.
When risks are assessed on a project, the project team should take note of the following:
- Risk registers and risk heat maps do very little to guide our understanding of uncertainty, variation and responses to risk – quantitative risk assessments are essential for project decision making.
- Successful quantitative risk assessments require analysts to clearly separate uncertainty from variation.
- Uncertainty is a human condition, but variation is independent and is not affected by the presence of a human observer. Range estimation of time and cost reflects the uncertainty of a specific observer about the underlying variation.
- Project variation can be reduced through better project definition, for instance requirements definition, engineering and design, and planning.
- Project uncertainty can be reduced if team members understand the underlying factors that drive variation. It can also be reduced by comparing project estimates to similar previous projects through benchmarking, independent project reviews, and reference class forecasting.
Kahneman, D., & Lovallo, D. (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking. Management Science, 39(1), 17-31.
Lindley, D. V. (2006). Understanding uncertainty. Hoboken, New Jersey: John Wiley & Sons.