Anyone who has ever worked with a company’s financials, in any capacity, has had to deal with forecasting. Whether establishing the annual operating plan (AOP) or identifying manpower requirements, there’s a need to project, that is, to predict, future needs, their associated costs, and their impact on profit, since Revenue – Cost = Profit.
Much like with Gary’s stone, there’s often a failure to understand what forecasting actually is. Gary’s stone isn’t doing any forecasting; it’s reporting actual conditions. Reporting is a statement of fact; forecasting is a prediction of the future.
When asked to forecast, however, many managers tend to drag their feet and stall, especially if there’s bad news involved. In the worst cases, they’ll sandbag their estimates and deliver less-than-accurate forecasts until the situation has passed, and all that’s left to “forecast” is the actual cost history.
This situation usually indicates an environment where planning is scarcely more than an afterthought and firefighting is the norm. Forecasting is about long-term vision and strategy, measurement, and learning. Focusing on reporting without planning leads to delayed information and chronic “hot buttons” that demand immediate attention.
When this occurs, the PDCA (Plan-Do-Check-Act) cycle is simply broken. The end result is a system where the people in the organization are in a constant state of “Do!” and “Act!” without any sense of why they are doing anything, or whether their efforts have actually produced an improvement. There’s certainly no ability to measure against plan, since a plan never existed, and everything is left to subjective interpretation.
What a forecast is: a best-guess assessment of likely events, the needs those events will create, and the costs those needs imply. Forecasts form the basis for deciding which activities will occur, and they set a mark against which those activities can be measured. What a forecast is not: an iron-clad, crystal-ball prediction of future events. Believing that it is causes the foot-dragging and unwillingness to estimate, not to mention senior managers who expect forecasts to be achieved to the penny.
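To make that “mark to measure against” idea concrete, here’s a minimal sketch in Python; the cost categories and figures are entirely invented. The point is simply that without the plan column, the variance column, and anything you might learn from it, cannot exist.

```python
# Hypothetical illustration: a forecast is a baseline to measure against,
# not a promise. All category names and figures here are invented.

forecast = {"labor": 120_000, "materials": 45_000, "travel": 8_000}
actuals  = {"labor": 131_500, "materials": 42_200, "travel": 9_100}

for category, planned in forecast.items():
    actual = actuals[category]
    variance = actual - planned          # positive = over plan
    pct = variance / planned * 100
    print(f"{category:>9}: plan {planned:>8,}  actual {actual:>8,}  "
          f"variance {variance:>+8,} ({pct:+.1f}%)")
```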
So how can we create better estimates? What are your thoughts?
Copyright © 2011 David Kasprzak
Hi, Alan – thanks for your comments.
The problem, however, is usually one of details and measurement. For ‘quantifiable inputs’ you have to ask: what inputs? If they are quantifiable, how are they measured? Are those metrics valid for interpreting or adjusting the items we’re forecasting? I’ve seen folks use hours-per-unit calculations based on historical averages; however, for one reason or another, the current environment required a very different hours-per-unit standard. People were using quantifiable, historical, accurate data, and yet the forecasts were always wrong.
I’m always wary of strictly quantifiable inputs, since they don’t take into account the qualitative information that more often than not points to the root cause of the performance variance.
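To illustrate that hours-per-unit trap with a made-up example (all numbers are invented): the arithmetic is sound and the history is accurate, yet the forecast still misses, because the standard no longer reflects current conditions.

```python
# Hypothetical numbers only: forecasting labor hours from a historical
# hours-per-unit average that no longer matches current conditions.

historical_hours = [410, 395, 420, 405]   # hours booked in past periods
historical_units = [100, 98, 102, 100]    # units produced in those periods

# Historical standard: quantifiable, accurate, and stale.
hours_per_unit = sum(historical_hours) / sum(historical_units)  # ~4.08

planned_units = 100
forecast_hours = hours_per_unit * planned_units

# Suppose the environment changed (new staff, new tooling, etc.) and the
# real rate is now 5.0 hours per unit.
actual_hours = 5.0 * planned_units

print(f"forecast: {forecast_hours:.0f} h, actual: {actual_hours:.0f} h, "
      f"miss: {actual_hours - forecast_hours:+.0f} h")
```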
I’ve seen a lot of forecasts constructed to fall into a range between two values. One value is the level at which management will decide the forecast isn’t aggressive enough. The other is the point at which management will later question why those numbers were not achieved.
A good forecast should be based on a variety of quantifiable inputs, and the weightings of those inputs should be adjusted as feedback is received on the accuracy of the results.
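One way to picture that weighting-and-feedback suggestion in code, as a hypothetical sketch rather than anything prescribed in the comment: blend several input estimates with weights, compare the blend to the actual result, and shift weight toward the inputs that came closest. The input names, values, and update rule below are all assumptions for illustration.

```python
# Hypothetical sketch: blend several forecast inputs with weights, then
# nudge the weights toward whichever inputs proved most accurate.

inputs = {
    "hours_per_unit_model": 480.0,   # invented estimates for one period
    "run_rate_trend":       520.0,
    "manager_judgment":     505.0,
}
weights = {name: 1.0 / len(inputs) for name in inputs}  # start equal

def blended_forecast(inputs, weights):
    return sum(weights[n] * v for n, v in inputs.items())

def update_weights(inputs, weights, actual, learning_rate=0.5):
    """Shift weight toward inputs with smaller absolute error."""
    errors = {n: abs(v - actual) for n, v in inputs.items()}
    # Inverse-error scores: inputs that came closer score higher.
    scores = {n: 1.0 / (e + 1e-9) for n, e in errors.items()}
    total = sum(scores.values())
    for n in weights:
        target = scores[n] / total
        weights[n] += learning_rate * (target - weights[n])
    return weights

actual = 512.0  # what really happened (also invented)
print(f"forecast before feedback: {blended_forecast(inputs, weights):.1f}")
weights = update_weights(inputs, weights, actual)
print({n: round(w, 3) for n, w in weights.items()})
```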