Every business uses metrics to assess various aspects of its performance. Some – usually the smallest and least diversified – may focus exclusively on the most basic financial measures. Others may be found at the opposite end of the spectrum, tracking a multitude of metrics across the entire organization – finance, operations, sales & marketing, human resources, research & development, and so on. The more extensively metricated organization is not necessarily more efficiently operated or more effectively managed, however. The administration of a metrics system incurs costs that must be balanced with its utility for it to be valuable to an organization.
An efficacious metrics system can greatly facilitate an organization’s management and improvement; a misguided one can be detrimental, in numerous ways, to individuals, teams, and the entire organization. The structure of a well-designed metrics system is influenced by the nature of the organization to be monitored – product vs. service, for-profit vs. nonprofit, public vs. private, large vs. small, start-up vs. well-established, etc. Organizations often choose to present their metrics systems according to popular templates – Management by Objectives (MBO), Key Performance Indicators (KPI), Objectives and Key Results (OKR), or Balanced Scorecard – but may choose to create a unique system or a hybrid. No matter what form it takes, or what name it is given, the purpose of a metrics system remains constant: to monitor and control – that is, to manage – the organization’s performance according to criteria its leaders deem relevant.
Why Institute a Metrics System?
Given the inherent risks in creating a metrics system – specifically, getting it wrong – it is understandable that some managers may be reluctant to do so. Without risk, however, there is no reward; the reward for properly executing a metrics system can be substantial. A suitable set of metrics provides managers with the tools necessary to understand various aspects of the organization’s performance. Insights made possible by metrics can help managers identify improvement opportunities in product design, process performance, marketing strategies, and other functions. Metrics can elucidate the best time to update a technology or effect some other shift in strategy. Decisions made without the benefit of metrics rely, to a great extent, on guesswork. Educated guesswork, one would hope, but guesswork nonetheless.
In summary, an operative metrics system performs the following functions: it gives managers the tools to understand various aspects of the organization’s performance, reveals improvement opportunities, informs the timing of strategic shifts, and replaces guesswork with evidence in decision-making.
Categories of Metrics
The inventory of potential metrics is seemingly endless; it is, therefore, impractical to attempt to provide a comprehensive catalog here. Instead, a more manageable discussion of metrics categories, with a small number of examples, will be presented. Inevitably, this, too, is incomplete, as any organization’s circumstances may cause a unique grouping of metrics to be advantageous for its management.
Internal and External Metrics
Internal metrics measure things that the organization controls, such as flow time, process yield, or utilization. External metrics measure things that affect the organization but are not directly controlled by it. Examples include supplier lead times, material costs, and stock price.
Many external metrics are important to track because of their influence on internal metrics. For example, supplier lead times and material costs impact a manufacturer’s ability to accurately forecast customer deliveries and to price products appropriately.
Supplier and Customer Metrics
Supplier and customer metrics are subsets of external metrics, but significant enough to justify special mention. Supplier performance affects sourcing decisions; reliable data is required for effective supply chain management. On-time delivery, quality level (i.e. number of rejects), and total cost of procurement are typical metrics used to inform supplier selection decisions.
Customer metrics include Net Promoter Score (NPS), number of warranty claims, acquisition cost, and lifetime value. A company can use this data to determine if its offerings and marketing strategies are engaging its target market.
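NPS, for example, reduces to simple arithmetic: the percentage of promoters (ratings of 9 or 10) minus the percentage of detractors (ratings of 0 through 6). A minimal sketch, using hypothetical survey responses:

```python
from typing import Iterable

def net_promoter_score(ratings: Iterable[int]) -> float:
    """Compute NPS from 0-10 survey ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count toward
    the total but neither group. NPS ranges from -100 to +100.
    """
    ratings = list(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Ten hypothetical responses: 5 promoters, 2 passives, 3 detractors
print(net_promoter_score([10, 9, 9, 8, 7, 6, 5, 9, 10, 3]))  # 20.0
```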
Just as external metrics can influence internal metrics, they can also influence other external metrics. Supplier and customer metrics often exhibit a pass-through relationship. For example, increasing warranty claims and other customer complaints may be correlated with declining supplier performance metrics (see discussion of leading indicators, below). This discovery would support a decision to purchase the affected material from a different supplier if the current supplier’s performance does not rebound quickly.
Financial and Operational Metrics
All matters affecting the economic viability of an organization are tracked with financial metrics such as contribution margin, cost of goods sold (COGS), or payroll as a percentage of sales. Operational metrics are used to measure an organization’s performance in manufacturing products or providing services, such as reject rates or customer satisfaction scores.
Many managers believe that “taking care of” the operational metrics will lead to “good” financial performance. While a strong case can be made for this position, it reflects an incomplete understanding. Successful operations are critical to an organization’s financial performance, but there are also influences that operational metrics will not capture.
Enterprise and Divisional Metrics
Enterprise metrics measure the performance of the entire organization, be it a single-location business or a multinational concern consisting of several divisions operating numerous facilities worldwide. Each of the categories and examples discussed above could serve as enterprise or divisional metrics. A small organization’s divisional metrics may measure departmental performance, while larger enterprises may track the performance of individual facilities or business units. The larger the organization, the more layers of divisional metrics there are likely to be. Metrics for each layer are typically generated by aggregating data from the previous layer; the final aggregation provides the enterprise-level performance metrics.
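Aggregating one layer’s metrics into the next is often a volume-weighted rollup, so that large facilities count proportionally. A minimal sketch, with hypothetical on-time-delivery rates and shipment volumes for three plants (all figures illustrative):

```python
def weighted_rollup(divisions: list[tuple[float, float]]) -> float:
    """Roll divisional rates up into a single enterprise rate.

    Each tuple is (metric_value, volume); values are weighted by
    volume so that larger divisions contribute proportionally more.
    """
    total_volume = sum(volume for _, volume in divisions)
    return sum(value * volume for value, volume in divisions) / total_volume

# Hypothetical on-time-delivery rates and shipment volumes per plant
plants = [(0.98, 1200), (0.91, 800), (0.95, 500)]
print(round(weighted_rollup(plants), 3))  # 0.952
```

A simple unweighted average of the three rates would overstate the smallest plant’s influence on the enterprise figure.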
Short-, Intermediate-, and Long-Lag Metrics
Long-lag metrics, such as market share, are used by executives to assess the effectiveness of strategic decisions characterized by delayed feedback. Intermediate-lag metrics have shorter feedback loops, but significant delays remain; cost variances and customer complaints, for example, still require investigation to link them to root causes. Long- and intermediate-lag metrics are best utilized to monitor trends, due to their inherent delayed feedback.
Short-lag metrics provide the most immediate feedback; ideally, this would be in real-time. Cycle time, scrap and rework, safety incidents, and so on provide information that managers can act on swiftly. Short-lag metrics are common inputs used in continuous improvement efforts.
As an aside, the best-case scenario would use “negative-lag” metrics (i.e. leading indicators), as these would allow proactive management. That is, issues could be addressed in advance, mitigating negative consequences. Unfortunately, leading indicators are difficult to find in most contexts, though possibilities exist in technology trends, regulatory notices, and safety-related concerns. These may be more “signs” than metrics; nonetheless, they should be monitored any time they can be identified.
Your organization’s custom-built metrics system may use some or all of these categories. You may also define sets of metrics using completely different terminology, particularly if you are creating a hybrid system. The names used are not important, so long as they convey the purpose of the metrics or origins of the data from which they are derived.
Caveats and Limitations
In order to develop and maintain an efficacious metrics system, managers must be aware of how it can fail. The system must be designed to prevent these failures and monitored for signs of misuse or abuse.
A common flaw in the design of metrics systems is the positioning of two monitored groups in opposition to one another. A typical result of this condition is for each group to expend more effort attempting to transfer responsibility – or blame – for negative outcomes to the “opposing” group than on improving performance. In this scenario, the goal becomes having better “numbers” than the other group, thus avoiding the spotlight.
This situation frequently occurs when separate groups are responsible for “supplier quality” and “product quality.” Product Quality (PQA) strives to convince management that any defects discovered existed in the supplied material and were not created by the process for which they are responsible. Supplier Quality (SQA), on the other hand, strives to convince management that control of incoming material is sufficient to prevent defective material from reaching production operations. While both parties are politicking to preserve acceptable metrics – and a lighter workload – neither is searching for a root cause or improvement strategy.
A similar face-off occurs between many production and maintenance departments. To meet production requirements, managers continue to run equipment that has been scheduled for service. As a result, the maintenance department’s PM compliance metric suffers. Poorly maintained equipment then experiences additional failures; the resulting production shortfalls are recovered by running through scheduled maintenance periods, deepening the maintenance deficit. Repeating this pattern results in a downward spiral of both production and maintenance performance metrics; arguments and finger-pointing ensue, unabated.
The irony of the examples above is that the goals of the opposing groups are exactly the same. It is a poorly designed and unmonitored metrics system that has placed them in opposition. Both groups, and the entire organization, suffer as a result. The metrics system has created a principal-agent problem.
The principal is the organization as a whole, or a manager, at some level, who has assigned responsibility for some measure of performance – the metric. The agent is the person or group accountable for the metric. In the SQA vs. PQA example, the principal wants to ensure that all incoming material and outgoing product are defect-free. The agents, SQA and PQA, however, each identify a scapegoat that undermines the principal’s objectives. A principal-agent problem is defined by the opportunity of an agent to pursue its own objectives, whether or not the principal’s agenda is supported. The principal’s task now becomes policing the assignment of responsibility for defects among the agents; instead of achieving the desired results, the metrics system has created more work.
A metrics system can be data-intensive; a large amount of data may need to be collected, analyzed, formatted, and presented. Carelessness at any stage can invalidate the conclusions reached by utilizing the data, jeopardizing the success of any initiatives it was intended to support.
As data collection and analysis becomes ever-simpler, a new set of potential problems emerges. A vast array of readily-available data may increase the difficulty of “separating the vital few from the trivial many.” A critical requirement of an efficacious metrics system is to maintain a manageable set of the most important metrics.
Some responses to a deluge of data are self-defeating. Before substituting the average of a measure for more detailed parameters, for example, be sure the average is useful and appropriate. Learning that your vacation destination has a 70° F average temperature does nothing to prepare you for the 100° F day or the 40° F night!
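The temperature example reduces to two lines of arithmetic: the mean alone says nothing about the extremes a traveler must actually pack for.

```python
temps = [40, 55, 70, 85, 100]  # hypothetical daily readings, in °F

average = sum(temps) / len(temps)
print(average)                 # 70.0 -- the figure the brochure reports
print(min(temps), max(temps))  # 40 100 -- the range that actually matters
```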
Aggregation is another method of reducing the load of processing a large dataset. It also has the potential to distort the message or limit the utility of the data. Overall Equipment Effectiveness (OEE) is an aggregate measure that is useful as a comparative tool. It has very little value, however, in crafting improvement strategies until it is disaggregated into its component measures (availability, quality, and productivity). As always, the key is to use the right measure at the right time.
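As an aggregate, OEE is simply the product of its component rates. The hypothetical figures below show why disaggregation matters: two machines can post identical OEE scores while demanding entirely different improvement strategies. (“Productivity” here follows the terminology above; it is often labeled “performance” elsewhere.)

```python
def oee(availability: float, productivity: float, quality: float) -> float:
    """Aggregate OEE: the product of its three component rates (each 0.0-1.0)."""
    return availability * productivity * quality

# Two machines with identical OEE but very different problems:
print(round(oee(0.90, 0.95, 0.70), 4))  # 0.5985 -- quality is the bottleneck
print(round(oee(0.70, 0.95, 0.90), 4))  # 0.5985 -- availability is the bottleneck
```

The aggregate score alone cannot distinguish the two cases; only the component measures point to the right corrective action.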
Another misuse of metrics committed more frequently with the proliferation of data-collection tools relates to lag (see “Categories of Metrics” section above). A steady stream of incoming data may create the perception of real-time measures of performance. However, this may only be true for a small fraction of the data received. Therefore, it is important to temper one’s response to data with knowledge of its lag. Reactions to data, in the form of process adjustments or procedural modifications, should occur less frequently as a measure’s lag increases. As mentioned earlier, longer-lag measures are more useful for monitoring trends and supporting long-term strategy decisions.
Perhaps the most pernicious use of metrics is as the exclusive determinant of an employee’s reward or punishment. Many times, employees are assigned responsibility for metrics that include factors outside their control. In addition to the lowered morale this typically causes, it also exacerbates the principal-agent problem, as employees seek ways to manipulate performance measures they cannot control by normal means.
Metrics can also lull an organization into a false sense of security or superiority. When goals have been reached and no obvious threats have appeared on the horizon, managers can become complacent. Concluding that an organization has “reached the end” of its improvement journey, based on satisfactory metrics, is truly perilous. (See “Is World Class Good Enough?”)
Many of the dangers of metrics systems can be explained by the Hawthorne Effect – the phenomenon of changes in behavior due to a person’s awareness of being observed. The Hawthorne Effect can lead to both positive and negative outcomes. Applied to metrics system design, it explains why metric selection is so critical. Establishment of a metric informs members of an organization that behavior related to this measure is being observed. Effort is then expended to assure favorable assessments. If not carefully designed, however, a metrics system can cause limited resources to be diverted from valuable activities to trivial ones. Adversarial relationships can also be created, as in the SQA vs. PQA example.
Readers of Goodhart’s Law could conclude that no metrics system will ever achieve its stated objectives, though this would be short-sighted. Goodhart’s original statement, in reference to monetary policy:
“Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
A popular reformulation states it more generally and simply for the layman:
“When a measure becomes a target, it ceases to be a good measure.”
This is akin to the observer effect, often associated with Heisenberg’s Uncertainty Principle in quantum mechanics. A generalized lay version:
“The act of measuring changes the measurement.”
Though conceived in disparate fields of study, both are directly applicable to organizational metrics systems and reinforce the discussion of the Hawthorne Effect. To be successful, managers must be aware of the potential impacts assessments have on the activities and outcomes being evaluated and design, monitor, and adjust accordingly.
Metrics systems are, by definition, quantitative in nature. Overreliance on numeric metrics is detrimental to an organization and its members. Periodic reviews of metrics entice managers to adopt a short-term focus. Benefits of long-term development and qualitative measures are often overlooked. An employee’s hard work, dedication, willingness to learn and to teach others, creativity, and positive influence on colleagues will likely never be captured by a typical metrics system. That these qualities go unmeasured should never preclude them from consideration in performance reviews or development plans; a metrics system cannot provide all the information needed to develop and optimize an organization.
“Beware the Metrics System – Part 2” will present several industry examples and best practices for developing and managing metrics systems. In the meantime, feel free to contact JayWink Solutions to discuss any questions or concerns you may have regarding the use of metrics.
[Link] “To Measure is to Know,” Susan Leister and Suzanne Tran. Quality Progress, November 2015.
[Link] “Psychological Impact of Metrics,” Duke Okes. The Journal for Quality & Participation, January 2013.
[Link] The Tyranny of Metrics, Jerry Z. Muller. Princeton University Press, 2018.
[Link] “The Balanced Scorecard – Measures that Drive Performance,” Robert S. Kaplan and David P. Norton. Harvard Business Review, January-February 1992.
[Link] “Time-Relevant Metrics in an Era of Continuous Improvement: The Balanced Scorecard Revisited,” Richard J. Schonberger. Quality Management Journal, Vol. 20, No. 3, 2013.
[Link] “Are Manufacturers’ KPI Reporting Practices Keeping Up?” Holly Lyke-Ho-Gland. IndustryWeek, June 4, 2019.
[Link] “It’s All in the Numbers – KPI Best Practices,” Lee Schwartz. IndustryWeek, March 24, 2015.
[Link] “Leading With Next-Generation Key Performance Indicators,” Michael Schrage and David Kiron. MIT Sloan Management Review, June 2018.
[Link] “Does Management by Objectives Stifle Excellence?” John Dyer. IndustryWeek, December 17, 2013.
[Link] “The OPTIMAL MBO: A model for effective management-by-objectives implementation,” Sharon Gotteiner. European Accounting and Management Review, Vol. 2, No. 2, May 2016.
[Link] “OKRs.” Medium.com, November 28, 2014.
[Link] “OKR: Should you use them for setting goals?” Claire Lew. Knowyourteam.com, July 18, 2019.
[Link] “On the folly of rewarding A, while hoping for B,” Steven Kerr. The Academy of Management Executive, February 1995.
[Link] “Goodhart’s law,” Wikipedia.com.
[Link] “Goodhart’s Law and Why Measurement is Hard,” David Manheim. Ribbonfarm.com, June 9, 2016.
[Link] “Overpowered Metrics Eat Underspecified Goals,” David Manheim. Ribbonfarm.com, September 29, 2016.
[Link] “Our Metrics Fetish – And What to Do About It,” David Shaywitz. Forbes, June 23, 2011.
[Link] “We Are Not a Dashboard: Contesting the Tyranny of Metrics, Measurement, and Managerialism,” David Shaywitz. Forbes, December 24, 2018.
[Link] “The burden of checklists and the importance of core metrics,” Bill Gardner. Theincidentaleconomist.com, May 20, 2015.
[Link] “Five Categories to Focus Your KPIs,” Michael Schrage. MIT Sloan Management Review, September 21, 2018.
[Link] “Manufacturing KPIs: How Do Yours Compare?” Jill Jusko. IndustryWeek, June 24, 2019.
[Link] Selecting the Right Manufacturing Improvement Tools: What Tool? When? Ron Moore. Elsevier, 2007.
[Link] “Standing Out from the Start,” Lindsay Scott. PM Network, June 2019.
Jody W. Phelps, MSc, PMP®, MBA
JayWink Solutions, LLC
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC