Most enterprises do not have well-designed performance measurement systems. Many organizations can cite department or enterprise goals but lack effective measurement of their progress toward those goals. Too many businesses lack goals of any kind beyond vague statements such as “improve customer satisfaction.” There may be active performance measurement systems in place, as well as improvement initiatives, but no one is monitoring the results or holding anyone accountable for them. Other problems we commonly find with performance measurement systems include:
Where operational metrics do exist, they lack value.
There is a surprising lack of understanding in business of what metrics are for and how best to use them. Many organizations measure what they think they should measure or what they can easily capture, neither of which yields useful data for improvement efforts.
Metrics are not aligned with the needs of the immediate customer.
Rather than measure what is easy, measure what matters most. Every serious performance decision must start by understanding the extent to which the primary customers’ needs are met and at what cost.
The cost of poor output quality hides true output costs.
Quality output is output that successfully meets the primary customer’s needs. Ignoring the cost of the resources consumed when that need is not met hides significant opportunities for improvement.
Data is not trended or analyzed.
Why collect data at all if no one looks at what it says about performance? Too often we see organizations collecting and reporting data without spending any effort on interpreting it.
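Even a very simple trend check can turn collected data into something actionable. As a minimal sketch (the weekly defect counts below are invented for illustration), comparing a recent moving average against an earlier baseline flags a drift that a static report would hide:

```python
# Minimal sketch: turning collected data into a trend signal.
# Weekly defect counts are hypothetical, for illustration only.
weekly_defects = [12, 14, 11, 13, 18, 21, 24, 27]

def moving_average(values, window):
    """Average of the last `window` values."""
    return sum(values[-window:]) / window

baseline = moving_average(weekly_defects[:4], 4)  # earlier four weeks
recent = moving_average(weekly_defects, 4)        # latest four weeks

# A simple rule: flag when the recent average drifts well above baseline.
if recent > baseline * 1.2:
    print(f"Defects trending up: {baseline:.1f} -> {recent:.1f}")
```

The thresholds and window sizes here are arbitrary; the point is only that trending requires some analysis step beyond collection and reporting.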
Common metrics and KPIs are lagging indicators, resulting in a reactive response.
Tracking both leading and lagging indicators is critical to understanding what is going on in an organization and what actions need to be taken. Lagging indicators measure what has already happened and provide context when trended or analyzed; leading indicators measure the activities that drive future results, and so can be acted on before outcomes are locked in.
For example, the number of qualified leads provided to a sales pursuit group by a lead generation group gives an understanding of lead generation output, a lagging indicator for the lead generation group. The sales pursuit group then takes action to produce sales from the leads provided. Measuring those successful sales is a lagging indicator for the sales pursuit group. In this example, leading sales pursuit indicators would include measures of what is done with the leads received, such as mailings, calls, appointments for calls or meetings, and presentations. Measuring actions taken on leads relative to the volume of leads received and relative to successful sales allows the sales pursuit group to take meaningful actions in response to leading indicators and see the results in their lagging indicators.
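The lead-generation example above can be sketched numerically. All figures below are invented; the point is only that relating actions taken on leads (leading indicators) to closed sales (the lagging indicator) yields ratios the pursuit group can act on:

```python
# Hypothetical monthly figures for a sales pursuit group.
leads_received = 200   # lagging indicator for the lead generation group
calls_made = 150       # leading indicator: action taken on leads
meetings_held = 60     # leading indicator: deeper engagement
sales_closed = 18      # lagging indicator for the sales pursuit group

# Leading ratios: how thoroughly the leads are being worked.
contact_rate = calls_made / leads_received   # share of leads contacted
meeting_rate = meetings_held / calls_made    # share of calls advancing

# Lagging ratio: overall lead-to-sale conversion.
conversion = sales_closed / leads_received

print(f"Contact rate: {contact_rate:.0%}, conversion: {conversion:.0%}")
```

If the contact rate slips while conversion has not yet moved, the group can correct course before the lagging sales number deteriorates.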
There is no time set aside for investigative analytics.
Measurement data should lead to problem-solving or improvement actions, and there is often a need to dig deeper into what the data indicates. Without investigative analytics, such as Root Cause Analysis or Pareto Analysis for prioritization, meaningful data will go to waste in the organization.
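The Pareto Analysis mentioned above can be illustrated in a few lines. This is a sketch with invented failure counts: rank the categories by impact and keep the "vital few" that account for roughly 80% of the total, which become the priorities for root-cause work:

```python
# Sketch of a Pareto analysis: rank failure categories by count and
# identify the "vital few" covering ~80% of all failures.
# The categories and counts are hypothetical.
failures = {
    "data entry error": 120,
    "missing approval": 45,
    "system timeout": 20,
    "duplicate record": 10,
    "other": 5,
}

total = sum(failures.values())
cumulative = 0
vital_few = []
for cause, count in sorted(failures.items(), key=lambda kv: -kv[1]):
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.8:
        break

print(vital_few)  # the categories to prioritize first
```

Here two of five categories account for over 80% of failures, so improvement effort concentrates there rather than spreading across everything at once.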
The financial impact of activities is not valued by leadership.
As difficult as it may be to believe that a business’s leadership does not value the financial impact of activities within the business, it is objectively true if cost or resource consumption is not a key metric. Process cost can be complicated to capture and can require a transparency that many organizations balk at. Nevertheless, if an organization does not know what it costs to produce an invoice, then its leadership does not care about that cost.
Return on investment is not considered in measurement design.
Return on investment is an important metric for any initiative or investment undertaken. Investment decisions in all aspects of the business must include an expected return, and both the realization of that return and its timeline should be measured.
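As a minimal sketch of measuring the return and its timeline, with purely hypothetical figures, the two basic quantities are the ROI over a chosen horizon and the payback period:

```python
# Simple ROI and payback sketch for an improvement initiative.
# All monetary figures are hypothetical.
investment = 50_000.0        # one-time cost of the initiative
annual_benefit = 20_000.0    # expected yearly savings it produces
horizon_years = 3            # evaluation horizon chosen up front

# ROI over the horizon: net gain relative to the investment.
roi = (annual_benefit * horizon_years - investment) / investment

# Payback period: how long until the benefit covers the investment.
payback_years = investment / annual_benefit

print(f"{horizon_years}-year ROI: {roi:.0%}, payback: {payback_years:.1f} years")
```

Building these two figures into the measurement design, and then tracking actual benefits against the assumptions, is what makes the expected return testable rather than a one-time justification.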
The difficulty of collecting the data required for a metric is a predictor of failure.
The best metrics capture meaningful data with the absolute minimum of process interruption. The harder it is to capture and report metric data, the more likely it is that the data will not be captured and the metric will fail. Complicated data requirements virtually guarantee collection failure or, where collection is automated, a high likelihood that those who would use the data to take improvement action will not understand what it indicates.
Data is not available in real-time.
If measurement data is not available in a timely manner, ideally in real time, it essentially becomes obsolete before it can be put to good use. Basing actions on obsolete data exposes the organization to a number of risks, including the possibility that actions will produce results counter to expectations and do more harm than good.
The lack of automation and tools can make meaningful measurements seem impossible.
Some metrics, especially in an environment with no metrics in place, can initially be captured using very low-tech methods. For the sake of accuracy, accessibility, ease of use, and meaningful analysis, however, fitting the right tools and automation to the measurement design is key to successful metric implementation.
By broadly assessing the most essential performance elements and then aligning performance measures with strategic purpose and intent, the organization leader can drive best-in-class performance that provides a sustainable competitive advantage.