Claims about the impact of sustainability initiatives – or the lack thereof – on a company’s financial performance are prevalent in the media. Claims cover the spectrum from crippling, through negligible, to transformative. Articles making these claims target audiences ranging from corporate executives to non-industry activists, politicians, and regulators. Likewise, the articles cite vastly differing sources to support their claims.
These articles are often rife with unsupported claims and inconsistencies, are poorly sourced, poorly written, and dripping with bias. The most egregious are often rewarded with “likes,” “shares,” and additional “reporting” by equally biased purveyors of “the word.” These viewpoint warriors create a fog of war that makes navigating the mine-laden battlefield of stakeholder interests incredibly treacherous.
The fog of war is penetrated by stepping outside the chaos to collect and analyze relevant information. To do this in the sustainability vs. profitability context, a group from the NYU Stern Center for Sustainable Business has developed the Return on Sustainability Investment (ROSI) framework. ROSI ends reliance on the incessant cascades of conflicting claims, providing a structure for investigating the impacts of sustainability initiatives on an organization’s financial performance.
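As a rough illustration of the arithmetic such an analysis ultimately produces, the sketch below nets the monetized benefits of a hypothetical sustainability initiative against its cost and expresses the result as a return on investment. The ROSI framework itself prescribes a fuller process (identifying strategies and mediating factors before monetizing benefits); this shows only the final step, and every line item and figure is invented.

```python
# Hedged sketch of a ROSI-style bottom line: monetize benefits attributed
# to a sustainability initiative, net them against the investment, and
# express the result as a return. All figures are invented for illustration.

monetized_benefits = {
    "energy savings":          120_000,
    "reduced waste disposal":   45_000,
    "lower employee turnover":  60_000,
}
investment = 150_000

net_benefit = sum(monetized_benefits.values()) - investment   # 75_000
rosi = net_benefit / investment                               # 0.5, i.e. 50%
```

The value of the framework lies less in this division than in the disciplined identification and monetization of benefits that feed it.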
Interest in Zero-Based Budgeting (ZBB) is somewhat cyclical, rising in times of financial distress and nearly disappearing in boom times. This can be attributed, in large part, to detractors instilling fear in managers by depicting it as a “slash and burn” cost-cutting, or downsizing, technique. This is a gross misrepresentation of the ZBB process.
ZBB is applicable to the public sector (various levels of government), private sector (not-for-profit and commercial businesses), and the very private sector (personal finances). Each sector is unique in its execution of ZBB, but the principle of aligning expenditure with purpose is consistent throughout.
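The principle of aligning expenditure with purpose can be sketched in a few lines: every line item starts from zero and enters the budget only with an explicit justification, rather than carrying forward last year’s figure. All items, amounts, and justifications below are invented for illustration.

```python
# Minimal sketch of the zero-based principle: no request is assumed;
# unjustified spending defaults to zero. Data is invented.

requests = [
    {"item": "equipment maintenance", "amount": 40_000,
     "justification": "contractual uptime target"},
    {"item": "trade show travel",     "amount": 15_000,
     "justification": None},
    {"item": "operator training",     "amount": 8_000,
     "justification": "new line commissioning"},
]

def build_zero_based_budget(requests):
    """Admit only requests with an explicit justification."""
    approved = [r for r in requests if r["justification"]]
    total = sum(r["amount"] for r in approved)
    return approved, total

approved, total = build_zero_based_budget(requests)   # total = 48_000
```

The mechanism is the same whether the "requests" are government programs, departmental expenses, or household spending; only the justification criteria change.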
This installment of “The Third Degree” describes the ZBB process in each sector, compares it to “traditional” budgeting, and explores its advantages and disadvantages. Alternative implementation strategies that facilitate matching the ZBB approach to an organization’s circumstances are also presented.
To conduct a Process FMEA according to AIAG/VDA alignment, the seven-step approach presented in Vol. VI (Aligned DFMEA) is used. The seven steps are repeated with a new focus of inquiry. Like the DFMEA, several system-, subsystem-, and component-level analyses may be required to fully understand a process.
Paralleling previous entries in the “FMEA” series, this installment presents the seven-step aligned approach applied to process analysis and the “Standard PFMEA Form Sheet.” Review of classical FMEA and aligned DFMEA is recommended prior to pursuing aligned PFMEA; familiarity with the seven steps, the terminology used, and the documentation formats will make aligned PFMEA more comprehensible.
To differentiate it from “classical” FMEA, the result of the collaboration between AIAG (Automotive Industry Action Group) and VDA (Verband der Automobilindustrie) is called the “aligned” Failure Modes and Effects Analysis process. Using a seven-step approach, the aligned analysis incorporates significant work content that has typically been left on the periphery of FMEA training, though it is essential to effective analysis.
In this installment of the “FMEA” series, development of a Design FMEA is presented following the seven-step aligned process. Use of an aligned documentation format, the “Standard DFMEA Form Sheet,” is also demonstrated. In similar fashion to the classical DFMEA presentation of Vol. III, the content of each column of the form will be discussed in succession. Review of classical FMEA is recommended prior to attempting the aligned process to ensure a baseline understanding of FMEA terminology. Also, comparisons made between classical and aligned approaches will be more meaningful and, therefore, more helpful.
Preparations for Process Failure Modes and Effects Analysis (Process FMEA) (see Vol. II) occur, in large part, while the Design FMEA undergoes revision to develop and assign Recommended Actions. An earlier start, while ostensibly desirable, may result in duplicated effort. As a design evolves, the processes required to support it also evolve; allowing a design to reach a sufficient level of maturity to minimize process redesign is an efficient approach to FMEA.
In this installment of the “FMEA” series, how to conduct a “classical” Process FMEA (PFMEA) is presented as a close parallel to that of DFMEA (Vol. III). Each is prepared as a standalone reference for those engaged in either activity, but reading both is recommended to maintain awareness of the interrelationship of analyses.
In the context of Failure Modes and Effects Analysis (FMEA), “classical” refers to the techniques and formats that have been in use for many years, such as those presented in AIAG’s “FMEA Handbook” and other sources. Numerous variations of the document format are available for use. In this discussion, a recommended format is presented; one that facilitates a thorough, organized analysis.
Preparations for FMEA, discussed in Vol. II, are agnostic to the methodology and document format chosen; the inputs cited are applicable to any available. In this installment of the “FMEA” series, how to conduct a “classical” Design FMEA (DFMEA) is presented by explaining each column of the recommended form. Populating the form columns in the proper sequence is only an approximation of analysis, but it is a very useful one for gaining experience with the methodology.
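As a preview of where the column-by-column walkthrough leads, the sketch below shows the classical ranking step: each failure mode is rated for Severity (S), Occurrence (O), and Detection (D) on 1-to-10 scales, and the Risk Priority Number (RPN = S × O × D) ranks items for Recommended Actions. The failure modes and ratings are invented for illustration.

```python
# Classical DFMEA ranking sketch: RPN = Severity * Occurrence * Detection,
# each rated 1-10. Failure modes and ratings below are invented.

failure_modes = [
    {"mode": "seal leaks under vibration", "S": 8, "O": 4, "D": 6},
    {"mode": "fastener corrodes",          "S": 5, "O": 3, "D": 2},
    {"mode": "housing cracks at rib",      "S": 9, "O": 2, "D": 7},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Highest-risk items first, to prioritize Recommended Actions.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
```

Note that RPN is only a prioritization aid; a low RPN with a high Severity rating still deserves scrutiny, which is one motivation for the aligned methodology’s different prioritization approach.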
Prior to conducting a Failure Modes and Effects Analysis (FMEA), several decisions must be made. The scope and approach of analysis must be defined, as well as the individuals who will conduct the analysis and what expertise each is expected to contribute.
Information-gathering and planning are critical elements of successful FMEA. Adequate preparation reduces the time and effort required to conduct a thorough FMEA, thereby reducing lifecycle costs, as discussed in Vol. I. Anything worth doing is worth doing well. In an appropriate context, conducting an FMEA is worth doing; plan accordingly.
Failure Modes and Effects Analysis (FMEA) is most commonly used in product design and manufacturing contexts. However, it can also be helpful in other applications, such as administrative functions and service delivery. Each application context may require refinement of definitions and rating scales to provide maximum clarity, but the fundamentals remain the same.
Several standards have been published defining the structure and content of Failure Modes and Effects Analyses (FMEAs). Within these standards, there are often alternate formats presented for portions of the FMEA form; these may also change with subsequent revisions of each standard.
Add to this variety the diversity of industry and customer-specific requirements. Those unbeholden to an industry-specific standard are free to adapt features of several to create a unique form for their own purposes. The freedom to customize results in a virtually limitless number of potential variants.
Few potential FMEA variants are likely to have broad appeal, even among those unrestricted by customer requirements. This series aims to highlight the most practical formats available, encouraging a level of consistency among practitioners that maintains Failure Modes and Effects Analysis as a portable skill. Total conformity is not the goal; presenting perceived best practices is.
Thus far, the “Making Decisions” series has presented tools and processes used primarily for prioritization or single selection decisions. Decision trees, in contrast, can be used to aid strategy decisions by mapping a series of possible events and outcomes.
Its graphical format allows a decision tree to present a substantial amount of information, while the logical progression of strategy decisions remains clear and easy to follow. The use of probabilities and monetary values of outcomes provides for a straightforward comparison of strategies.
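The comparison of strategies reduces to expected monetary value (EMV): each chance node averages its outcomes by probability, and the decision node selects the branch with the highest EMV. A minimal sketch, with strategies, probabilities, and payoffs invented for illustration:

```python
# Decision-tree EMV sketch: chance nodes are probability-weighted averages;
# the decision node takes the maximum. All numbers are invented.

def emv(outcomes):
    """Expected monetary value of a chance node: sum of p * payoff."""
    return sum(p * payoff for p, payoff in outcomes)

strategies = {
    # strategy: list of (probability, net payoff) pairs
    "expand plant": [(0.6, 500_000), (0.4, -200_000)],
    "outsource":    [(0.7, 250_000), (0.3,   50_000)],
    "do nothing":   [(1.0,        0)],
}

best = max(strategies, key=lambda s: emv(strategies[s]))
```

In a full tree, this calculation is "rolled back" from the rightmost outcomes to the initial decision, node by node.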
Choosing effective strategies for waging war against error in manufacturing and service operations requires an understanding of “the enemy.” The types of error to be combatted, the sources of these errors, and the amount of error that will be tolerated are important components of a functional definition (see Vol. I for an introduction).
The traditional view is that the amount of error to be accepted is defined by the specification limits of each characteristic of interest. Exceeding the specified tolerance of any characteristic immediately transforms the process output from “good” to “bad.” This is a very restrictive and misleading point of view. Loss functions provide much greater insight into product performance and customer satisfaction.
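The best-known example is the quadratic (Taguchi) loss function, in which loss grows with the square of the deviation from the target rather than jumping from zero at the specification limit: L(y) = k(y − m)². The constant k is typically calibrated from a known loss A at a known deviation d, i.e. k = A/d². A minimal sketch, with the calibration values invented:

```python
# Quadratic (Taguchi) loss sketch: loss rises continuously with deviation
# from target, contradicting the "goalpost" view. Calibration is invented.

def quadratic_loss(y, target, k):
    """Taguchi loss L(y) = k * (y - target)**2 for a measured value y."""
    return k * (y - target) ** 2

# Calibrate k: suppose a 0.5 mm deviation from target costs $20 per unit.
A, d = 20.0, 0.5
k = A / d ** 2          # 80.0 dollars per mm^2

on_target  = quadratic_loss(10.0, target=10.0, k=k)   # 0.0: no loss
near_limit = quadratic_loss(10.4, target=10.0, k=k)   # substantial loss well inside the limit
```

The key insight: a unit just inside the specification limit incurs nearly the same loss as one just outside it, a reality the goalpost view conceals.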
There is some disagreement among quality professionals about whether precontrol is a form of statistical process control (SPC). Like many tools prescribed by the Shainin System, precontrol’s statistical sophistication is disguised by its simplicity. The attitude of many seems to be that if it isn’t difficult or complex, it must not be rigorous.
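That simplicity is easy to demonstrate. A common formulation of precontrol divides the tolerance band into zones: the middle half is green, the outer quarters are yellow, and anything beyond the specification limits is red. A minimal sketch, with invented limits and measurements:

```python
# Precontrol zoning sketch: PC lines sit at 1/4 and 3/4 of the tolerance
# band. Specification limits and measurements below are invented.

def precontrol_zone(x, lsl, usl):
    """Classify a measurement into green, yellow, or red precontrol zones."""
    quarter = (usl - lsl) / 4.0
    lower_pc, upper_pc = lsl + quarter, usl - quarter
    if x < lsl or x > usl:
        return "red"       # outside specification
    if lower_pc <= x <= upper_pc:
        return "green"     # middle half of the tolerance band
    return "yellow"        # outer quarters, inside specification

zone = precontrol_zone(10.8, lsl=10.0, usl=11.0)   # outer quarter: "yellow"
```

Run rules built on these zones (e.g., reacting to consecutive yellows) supply the statistical rigor that the simple classification conceals.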
Despite its simplicity, precontrol provides an effective means of process monitoring, with several advantages over control charting.
Lesser known than Six Sigma, but no less valuable, the Shainin System is a structured program for problem solving, variation reduction, and quality improvement. While there are similarities between these two systems, some key characteristics lie in stark contrast.
This installment of “The War on Error” introduces the Shainin System, providing background information and a description of its structure. Some common problem-solving tools will also be described. Finally, a discussion of the relationship between the Shainin System and Six Sigma will be presented, allowing readers to evaluate the potential for implementation of each in their organizations.
Despite the ubiquity of corporate Six Sigma programs and the intensity of their promotion, it is not uncommon for graduates to enter industry with little exposure to these programs and even less understanding of their administration or purpose. Universities that offer Six Sigma instruction often do so as a separate certificate, not integrated with any degree program. Students are often unaware of the availability or the value of such a certificate.
Upon entering industry, the tutelage of an invested and effective mentor is far from guaranteed. This can curtail entry-level employees’ ability to contribute to company objectives, or even to understand the conversations taking place around them. Without a structured introduction, these employees may struggle to succeed in their new workplace, while responsibility for failure is misplaced.
This installment of “The War on Error” aims to provide an introduction sufficient to facilitate entry into a Six Sigma environment. May it also serve as a refresher for those seeking reentry after a career change or hiatus.
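Two quantities a newcomer will encounter almost immediately are defects per million opportunities (DPMO) and first-pass yield. The inspection counts below are invented; the formulas are the standard ones.

```python
# Two staple Six Sigma metrics, sketched with invented counts.

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def first_pass_yield(units_in, units_passed):
    """Fraction of units completed correctly without rework or scrap."""
    return units_passed / units_in

rate = dpmo(defects=21, units=1_000, opportunities_per_unit=6)   # ~3,500 DPMO
fpy  = first_pass_yield(units_in=1_000, units_passed=940)        # 0.94
```

DPMO maps (via the normal distribution) to the "sigma level" used to describe process capability; a Six Sigma process corresponds to 3.4 DPMO under the conventional 1.5-sigma shift assumption.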
According to the illustrious Joseph M. Juran, there is a “universal sequence for quality improvement” that defines the actions to be taken by any team to effect change. This includes teams pursuing error- and defect-reduction initiatives, variation reduction, or quality improvement by any other description.
Two of the seven steps of the universal sequence are “journeys” that the team must take to complete its problem-solving mission. The “diagnostic journey” and the “remedial journey” comprise the core of the problem-solving process and, thus, warrant particular attention.
“Beware the Metrics System – Part 1” presented potential advantages of implementing a metrics system, metric classifications, and warnings of potential pitfalls. This installment will provide examples from diverse industries and recommendations for development and management of metrics systems.
Every business uses metrics to assess various aspects of its performance. Some – usually the smallest and least diversified – may focus exclusively on the most basic financial measures. Others may be found at the opposite end of the spectrum, tracking a multitude of metrics across the entire organization – finance, operations, sales & marketing, human resources, research & development, and so on. The more extensively metricated organization is not necessarily more efficiently operated or more effectively managed, however. The administration of a metrics system incurs costs that must be balanced with its utility for it to be valuable to an organization.
An efficacious metrics system can greatly facilitate an organization’s management and improvement; a misguided one can be detrimental, in numerous ways, to individuals, teams, and the entire organization. The structure of a well-designed metrics system is influenced by the nature of the organization to be monitored – product vs. service, for-profit vs. nonprofit, public vs. private, large vs. small, start-up vs. mature, etc. Organizations often choose to present their metrics systems according to popular templates – Management by Objectives (MBO), Key Performance Indicators (KPI), Objectives and Key Results (OKR), or Balanced Scorecard – but may choose to create a unique system or a hybrid. No matter what form it takes, or what name it is given, the purpose of a metrics system remains constant: to monitor and control – that is, to manage – the organization’s performance according to criteria its leaders deem relevant.
An Introduction to the How and Why
Last year, I was invited to speak at a corporate “roundtable” on the subject of lightweighting. Though the host’s unfavorable terms compelled me to decline, I do not dismiss the topic as insignificant or unimportant. To the contrary, it is important enough to address here. For everyone. For free.
Lightweight design is increasingly critical to the success of many products. The aerospace and automotive industries are commonly cited practitioners, but lightweighting is equally important to manufacturers of a wide variety of products. Running shoes, health monitors, smart watches (probably dumb ones, too), various tools, and bicycles all become more appealing to consumers when weight is reduced. Any product that is worn or carried for a significant time or distance, lifted or manipulated frequently, is shipped in large quantities, or is self-propelled is a good candidate for lightweighting.
Many manufacturing and service companies succumb to competitive pressure by embarking on misguided cost-reduction efforts, failing to take a holistic approach. To be clear, lean is the way to be; lean is not the same as cost reduction. Successful cost-reduction efforts consider the entire enterprise, the entire product life cycle, and, most importantly, the effects that changes will make on customers.
Many times, solutions to operations problems can be found in unexpected places. In some cases, the opportunity for improvement is only recognized when a superior example is discovered. A fast-food restaurant could easily be overlooked by other service providers and manufacturers seeking best practices. The examples below aim to demonstrate why this potential benchmark should not be so quickly dismissed.
To sustain successful operations, projects should be undertaken in an efficient and transparent manner. Efficiency improves the affordability of projects, increasing opportunities for growth. Transparency allows a broader range of input to refine a project plan, lowers resistance to change, and increases the probability of success.
The six steps below outline a process that can be used to ensure efficiency and transparency in operations projects. With each new initiative launched, these steps should be refined, applying experience gained in previous projects, to tune the process to the dynamics of your organization. After a few iterations, creating and implementing optimal solutions will begin to feel natural, and anything less, anathema.
If you haven’t already done so, I recommend reading 4 Characteristics of an Optimal Solution before proceeding to the six steps. As each step is executed, bear in mind how the activities described aid in achieving the four characteristics desired. If activity begins to stray from the process goals, reassess and adjust the tasks, participants, objectives, and evaluation methods to reestablish and maintain alignment.
To implement an optimal solution to your company’s product development, capacity expansion, cost reduction, continuous improvement, or other project objective, your project team must be able to evaluate alternatives on four key qualitative measures. Each qualitative evaluation is informed by quantitative and pseudo-quantitative measures and other qualitative judgments that vary by project and objective. Interpretation of these measures is required to reach logical conclusions regarding the optimality of proposed solutions.
Upon completion of the initial evaluations of alternatives, there may be no clear winner, one determined to be best in all aspects. In this situation, another round of evaluation must be conducted to determine the best trade-off of benefits to pursue. It is imperative that the project team consider the potential motivations of influencers; interpersonal conflicts, personal agendas, or other “office politics” can provide perverse incentives that jeopardize the team’s success. Focusing on the merits of each alternative will limit undue influence on the final decision, providing maximum benefit to the company, its employees, and its customers.
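One common way to structure that second round of evaluation is a weighted decision matrix: score each alternative on the agreed criteria, weight the criteria by importance, and compare totals. The criteria, weights, and scores below are invented; the point is the mechanism, which keeps attention on merits rather than personalities.

```python
# Weighted decision matrix sketch for trade-off evaluation. All criteria,
# weights, and scores are invented for illustration.

weights = {"cost": 0.4, "quality impact": 0.3, "implementation risk": 0.3}

scores = {   # 1 (poor) to 5 (excellent) on each criterion
    "alternative A": {"cost": 4, "quality impact": 3, "implementation risk": 2},
    "alternative B": {"cost": 3, "quality impact": 4, "implementation risk": 4},
}

def weighted_total(alt_scores, weights):
    """Sum of criterion scores weighted by criterion importance."""
    return sum(weights[c] * alt_scores[c] for c in weights)

totals = {alt: weighted_total(s, weights) for alt, s in scores.items()}
best = max(totals, key=totals.get)
```

Agreeing on the weights before scoring the alternatives is the discipline that limits the influence of "office politics" on the outcome.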
A particularly prevalent project-evaluation shortcut is simply to select the alternative with the lowest initial cost. Unfortunately, that number is often misleading, misunderstood, or misquoted. Confidence in the accuracy of cost estimates is important, but initial cost remains but one criterion among many.
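A small sketch of why initial cost alone can mislead: discounting each alternative’s recurring costs to present value can reverse the ranking. All costs, the planning horizon, and the discount rate below are invented.

```python
# Life-cycle cost comparison sketch: initial cost plus the present value
# of recurring operating costs. All figures are invented.

def total_present_cost(initial, annual_operating, years, rate):
    """Initial cost plus discounted operating costs over the horizon."""
    pv_operating = sum(annual_operating / (1 + rate) ** t
                       for t in range(1, years + 1))
    return initial + pv_operating

machine_a = total_present_cost(initial=100_000, annual_operating=30_000,
                               years=5, rate=0.08)
machine_b = total_present_cost(initial=140_000, annual_operating=15_000,
                               years=5, rate=0.08)
# Machine A is cheaper up front, but Machine B costs less over the life cycle.
```

The alternative with the lower sticker price loses once operating costs are brought into view, which is exactly the trap the shortcut sets.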
Four characteristics that form the basis for selection of optimal solutions are outlined in the following sections.
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC