A precedence diagram is a building block for more advanced techniques in operations and project management. Precedence diagrams are used as inputs to PERT and Gantt charts, line balancing, and the Critical Path Method (topics of future installments of “The Third Degree”).
Many resources discuss precedence diagramming as a component of the techniques mentioned above. However, the fact that it can be used for each of these purposes, and others, warrants a separate treatment of the topic. Separate treatment is also intended to encourage reuse, increasing the value of each diagram created.
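Conceptually, a precedence diagram is a directed graph of activities and their immediate predecessors. The sketch below uses hypothetical activities (not drawn from any particular project) to show how one valid execution order falls out of such a diagram:

```python
from collections import deque

# Hypothetical activities and their immediate predecessors
# (illustrative data only).
predecessors = {
    "A": [],          # A has no predecessors
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],  # D cannot start until both B and C finish
}

def topological_order(predecessors):
    """Return one execution order consistent with the precedence diagram."""
    remaining = {task: set(preds) for task, preds in predecessors.items()}
    ready = deque(sorted(t for t, p in remaining.items() if not p))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        # Release any task whose last outstanding predecessor just finished.
        for other, preds in remaining.items():
            if task in preds:
                preds.remove(task)
                if not preds and other not in order and other not in ready:
                    ready.append(other)
    return order

print(topological_order(predecessors))  # ['A', 'B', 'C', 'D']
```

The same graph structure feeds directly into PERT calculations and Critical Path analysis, which is why a diagram built once can be reused across techniques.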
A cause & effect diagram is best conceptualized as a specialized application and extension of an affinity diagram. Both types of diagram can be used for proactive (e.g. development planning) or reactive (e.g. problem-solving) purposes. Both use brainstorming techniques to collect information that is sorted into related groups. Where the two diverge is in the nature of relationships represented.
An affinity diagram may present several types of relationships among pieces of information collected. A cause & effect diagram, in contrast, is dedicated to a single relationship and its “direction,” namely, what is cause and what is effect.
An affinity diagram may stretch the definition of “map,” but perhaps not as much as it first appears. Affinity diagrams map regions of thought, or attention, within a world of unorganized data and information.
Mapping regions of thought in an affinity diagram can aid various types of projects, including product or service development, process development or troubleshooting, logistics, marketing, and safety, health, and environmental sustainability initiatives. In short, nearly any problem or opportunity an organization faces can benefit from the use of this simple tool.
Numerous resources providing descriptions of affinity diagrams are available. It is the aim of “The Third Degree” to provide a more helpful resource than these often bland or confusing articles by adding nuance, insight, and tips for effective use of the tools discussed in these pages. In this tradition, the discussion of affinity diagrams that follows presents alternative approaches with the aim of maximizing the tool’s utility to individuals and organizations.
Most manufactured goods are produced and distributed to the marketplace where consumers are then sought. Services, in contrast, are not “produced” until there is a “consumer.” Simultaneous production and consumption is a hallmark of service; no inventory can be accumulated to compensate for fluctuating demand.
Instead, demand must be managed via predictable performance and efficiency. A service blueprint documents how a service is delivered, delineating customer actions and corresponding provider activity. Its pictorial format facilitates searches for improvements in current service delivery and identification of potential complementary offerings. A service blueprint can also be created proactively to optimize a delivery system before a service is made available to customers.
Claims about the impact of sustainability initiatives – or the lack thereof – on a company’s financial performance are prevalent in media. Claims cover the spectrum from crippling, through negligible, to transformative. Articles making these claims target audiences ranging from corporate executives to non-industry activists, politicians, and regulators. Likewise, the articles cite vastly differing sources to support claims.
These articles are often rife with unsupported claims and inconsistencies; they are poorly sourced, poorly written, and dripping with bias. The most egregious are often rewarded with “likes,” “shares,” and additional “reporting” by equally biased purveyors of “the word.” These viewpoint warriors create a fog of war that makes navigating the mine-laden battlefield of stakeholder interests incredibly treacherous.
The fog of war is penetrated by stepping outside the chaos to collect and analyze relevant information. To do this in the sustainability vs. profitability context, a group from the NYU Stern Center for Sustainable Business has developed the Return on Sustainability Investment (ROSI) framework. ROSI ends reliance on the incessant cascades of conflicting claims, providing a structure for investigating the impacts of sustainability initiatives on an organization’s financial performance.
To be effective in any pursuit, one must understand its objectives and influences. One influence, typically, has a greater impact on process performance than all others – the dominant characteristic of the process. The five main categories of process dominance are worker, setup, time, component, and information.
Processes require tools tailored to manage the dominant characteristic; this set of tools comprises a process control system. The levels of Operations Management at which the tools are employed, and at which the skills and responsibility for process performance reside, differ among the types of dominance.
This installment of “The Third Degree” explores categories of process dominance, tools available to manage them, and examples of processes with each dominant characteristic. Responsibility for control of processes exhibiting each category of dominance will also be discussed in terms of the “Eight Analogical Levels of Operations Management.”
Effective Operations Management requires multiple levels of analysis and monitoring. Each level is usually well-defined within an organization, though they may vary among organizations and industries. The size of an organization has a strong influence on the number of levels and the makeup and responsibilities of each.
In this installment of “The Third Degree,” one possible configuration of Operations Management levels is presented. An organization that can justify, or fully utilize, eight distinct levels of Operations Management is likely to be quite large. Therefore, the concepts presented should be applied to customize a configuration appropriate for a specific organization.
Standards and guidelines published by industry groups or standards organizations typically undergo an extensive review process prior to acceptance. A number of drafts may be required to refine the content and format into a structure approved by a committee of decision-makers.
As one might expect, the draft review and approval process is not consistent for every publication. The number of drafts, time to review, and types of changes requested will vary. Though each review is intended to be rigorous, errors often remain in the approved publication. The content may also require interpretation to employ effectively.
This is certainly true of the aligned AIAG/VDA FMEA Handbook. In this installment of the “FMEA” series, the Handbook’s errors and omissions, opacities and ambiguities will be discussed. Where possible, mistakes will be corrected, blanks filled in, and clarity provided in pursuit of greater utility of the Handbook for all FMEA practitioners.
The AIAG/VDA FMEA Handbook presents standard and alternate form sheets for Design, Process, and Supplemental FMEA. The formats presented do not preclude further customization, however. In this installment of the “FMEA” series, suggested modifications to the standard-format form sheets are presented. The rationale for changes is also provided to facilitate practitioners’ development of the most effective documentation for use in their organizations and by their customers.
No matter how useful or well-written a standard, guideline, or instruction is, there are often shortcuts, or “tricks,” to its efficient utilization. In the case of the aligned AIAG/VDA Failure Modes and Effects Analysis method, one trick is to use visual Action Priority tables.
In this installment of the “FMEA” series, use of visual tables to assign an Action Priority (AP) to a failure chain is introduced. Visual aids are used in many pursuits to provide clarity and increase efficiency. Visual AP tables are not included in the AIAG/VDA FMEA Handbook, but are derived directly from it to provide these benefits to FMEA practitioners.
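As a rough illustration of how a table-driven AP lookup works, the sketch below uses placeholder thresholds. These are NOT the Handbook’s actual Severity/Occurrence/Detection-to-AP mapping, which must be taken from the published tables; the code only shows the mechanics of turning ratings into a priority class:

```python
def action_priority(severity, occurrence, detection):
    """Illustrative AP lookup. The thresholds below are placeholders,
    NOT the AIAG/VDA table -- consult the Handbook for the real mapping."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be integers from 1 to 10")
    # Placeholder logic: high severity with non-trivial occurrence
    # escalates priority; low ratings across the board stay Low.
    if severity >= 9 and occurrence >= 4:
        return "H"
    if severity >= 7 and occurrence >= 4 and detection >= 5:
        return "H"
    if severity >= 5 and (occurrence >= 6 or detection >= 7):
        return "M"
    return "L"

print(action_priority(9, 5, 3))  # 'H'
print(action_priority(2, 2, 2))  # 'L'
```

A visual AP table serves the same purpose for human readers: it replaces rule-by-rule reading with a single at-a-glance lookup.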
As mentioned in the introduction to the AIAG/VDA aligned standard (“Vol. V: Alignment”), the new FMEA Handbook is a significant expansion of its predecessors. A substantial portion of this expansion is the introduction of a new FMEA type – the Supplemental FMEA for Monitoring and System Response (FMEA-MSR).
Modern vehicles contain a plethora of onboard diagnostic tools and driver aids. The FMEA-MSR is conducted to evaluate these tools for their ability to prevent or mitigate Effects of Failure during vehicle operation.
Discussion of FMEA-MSR is devoid of comparisons to classical FMEA, as it has no correlate in that method. In this installment of the “FMEA” series, the new analysis will be presented in similar fashion to the previous aligned FMEA types. Understanding the aligned Design FMEA method is critical to successful implementation of FMEA-MSR; this presentation assumes the reader has attained sufficient competency in DFMEA. Even so, review of aligned DFMEA (Vol. VI) is highly recommended prior to pursuing FMEA-MSR.
To differentiate it from “classical” FMEA, the result of the collaboration between AIAG (Automotive Industry Action Group) and VDA (Verband der Automobilindustrie) is called the “aligned” Failure Modes and Effects Analysis process. Using a seven-step approach, the aligned analysis incorporates significant work content that has typically been left on the periphery of FMEA training, though it is essential to effective analysis.
In this installment of the “FMEA” series, development of a Design FMEA is presented following the seven-step aligned process. Use of an aligned documentation format, the “Standard DFMEA Form Sheet,” is also demonstrated. In similar fashion to the classical DFMEA presentation of Vol. III, the content of each column of the form will be discussed in succession. Review of classical FMEA is recommended prior to attempting the aligned process to ensure a baseline understanding of FMEA terminology. Also, comparisons made between classical and aligned approaches will be more meaningful and, therefore, more helpful.
Suppliers producing parts for automotive manufacturers around the world have always been subject to varying documentation requirements. Each OEM (Original Equipment Manufacturer) customer defines its own requirements; these requirements are strongly influenced by the geographic region in which the OEM resides.
In an effort to alleviate confusion and the documentation burden of a global industry, AIAG (Automotive Industry Action Group) of North America and VDA (Verband der Automobilindustrie) of Germany jointly published the aligned “FMEA Handbook” in 2019. Those experienced with “classical” FMEA (Vol. III, Vol. IV) will recognize its influence in the new “standard”; however, there are significant differences that require careful consideration to ensure a successful transition.
In the context of Failure Modes and Effects Analysis (FMEA), “classical” refers to the techniques and formats that have been in use for many years, such as those presented in AIAG’s “FMEA Handbook” and other sources. Numerous variations of the document format are available for use. In this discussion, a recommended format is presented; one that facilitates a thorough, organized analysis.
Preparations for FMEA, discussed in Vol. II, are agnostic to the methodology and document format chosen; the inputs cited are applicable to any of them. In this installment of the “FMEA” series, how to conduct a “classical” Design FMEA (DFMEA) is presented by explaining each column of the recommended form. Populating the form columns in the proper sequence is only an approximation of analysis, but it is a very useful one for gaining experience with the methodology.
Prior to conducting a Failure Modes and Effects Analysis (FMEA), several decisions must be made. The scope and approach of analysis must be defined, as well as the individuals who will conduct the analysis and what expertise each is expected to contribute.
Information-gathering and planning are critical elements of successful FMEA. Adequate preparation reduces the time and effort required to conduct a thorough FMEA, thereby reducing lifecycle costs, as discussed in Vol. I. Anything worth doing is worth doing well. In an appropriate context, conducting an FMEA is worth doing; plan accordingly.
Failure Modes and Effects Analysis (FMEA) is most commonly used in product design and manufacturing contexts. However, it can also be helpful in other applications, such as administrative functions and service delivery. Each application context may require refinement of definitions and rating scales to provide maximum clarity, but the fundamentals remain the same.
Several standards have been published defining the structure and content of Failure Modes and Effects Analyses (FMEAs). Within these standards, there are often alternate formats presented for portions of the FMEA form; these may also change with subsequent revisions of each standard.
Add to this variety the diversity of industry- and customer-specific requirements. Those unbeholden to an industry-specific standard are free to adapt features of several to create a unique form for their own purposes. The freedom to customize results in a virtually limitless number of potential variants.
Few potential FMEA variants are likely to have broad appeal, even among those unrestricted by customer requirements. This series aims to highlight the most practical formats available, encouraging a level of consistency among practitioners that maintains Failure Modes and Effects Analysis as a portable skill. Total conformity is not the goal; presenting perceived best practices is.
Thus far, the “Making Decisions” series has presented tools and processes used primarily for prioritization or single selection decisions. Decision trees, in contrast, can be used to aid strategy decisions by mapping a series of possible events and outcomes.
Its graphical format allows a decision tree to present a substantial amount of information, while the logical progression of strategy decisions remains clear and easy to follow. The use of probabilities and monetary values of outcomes provides for a straightforward comparison of strategies.
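The rollback calculation behind such comparisons can be sketched as follows; the strategies, probabilities, and payoffs are hypothetical:

```python
# Roll back a small decision tree by expected monetary value (EMV).
# Strategies, probabilities, and payoffs are hypothetical.
strategies = {
    "expand":  [(0.6, 500_000), (0.4, -200_000)],  # (probability, payoff)
    "license": [(0.7, 150_000), (0.3, 50_000)],
}

def emv(outcomes):
    """Expected monetary value of one branch: sum of probability * payoff."""
    return sum(p * payoff for p, payoff in outcomes)

best = max(strategies, key=lambda s: emv(strategies[s]))
for name, outcomes in strategies.items():
    print(f"{name}: EMV = {emv(outcomes):,.0f}")
print("choose:", best)  # choose: expand
```

In a full decision tree, this rollback is applied recursively: chance nodes take the probability-weighted average of their branches, and decision nodes take the best available branch.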
A Pugh Matrix is a visual aid created during a decision-making process. It presents, in summary form, a comparison of alternatives with respect to critical evaluation criteria. As is true of other decision-making tools, a Pugh Matrix will not “make the decision for you.” It will, however, facilitate rapidly narrowing the field of alternatives and focusing attention on the most viable candidates.
A useful way to conceptualize the Pugh Matrix Method is as an intermediate-level tool, positioned between the structured, but open Rational Model (Vol. II) and the thorough Analytic Hierarchy Process (AHP, Vol. III). The Pugh Matrix is more conclusive than the former and less complex than the latter.
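The scoring mechanics are simple enough to sketch in a few lines; the concepts and criteria below are hypothetical:

```python
# Minimal Pugh Matrix sketch: each alternative is scored against a datum
# (baseline) on each criterion as +1 (better), 0 (same), or -1 (worse).
# Concepts and criteria are hypothetical.
criteria = ["cost", "durability", "ease of use"]
scores = {
    "concept A": [+1, 0, -1],
    "concept B": [+1, +1, 0],
    "datum":     [0, 0, 0],   # the baseline scores itself as all zeros
}

def summarize(scores):
    """Net score per alternative: count of pluses minus count of minuses."""
    return {alt: sum(s) for alt, s in scores.items()}

totals = summarize(scores)
print(totals)  # {'concept A': 0, 'concept B': 2, 'datum': 0}
```

Alternatives that score well become candidates for further iteration or hybridization; those that score poorly are dropped, rapidly narrowing the field.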
Committing resources to project execution is a critical responsibility for any organization or individual. Executing poor-performing projects can be disastrous for sponsors and organizations; financial distress, reputational damage, and sinking morale, among other issues, can result. Likewise, rejecting promising projects can limit an organization’s success by any conceivable measure.
The risks inherent in project selection compel sponsors and managers to follow an objective and methodical process to make decisions. Doing so leads to project selection decisions that are consistent, comparable, and effective. Review and evaluation of these decisions and their outcomes also becomes straightforward.
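One common objective, methodical approach, offered here only as an illustration, is a weighted scoring model; the criteria, weights, and ratings below are hypothetical:

```python
# Weighted scoring model for project selection (illustrative only).
# Higher ratings are better on every criterion; the "risk" rating is
# interpreted as risk acceptability (10 = lowest risk).
weights = {"strategic fit": 0.4, "expected ROI": 0.4, "risk": 0.2}
projects = {
    "project X": {"strategic fit": 8, "expected ROI": 6, "risk": 7},
    "project Y": {"strategic fit": 5, "expected ROI": 9, "risk": 4},
}

def weighted_score(ratings):
    """Sum of criterion weight times rating for one project."""
    return sum(weights[c] * r for c, r in ratings.items())

ranked = sorted(projects, key=lambda p: weighted_score(projects[p]),
                reverse=True)
print(ranked)  # ['project X', 'project Y']
```

Because the weights and ratings are recorded, the decision remains reviewable later: the same model can be rerun when conditions change, keeping selections consistent and comparable.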
Choosing effective strategies for waging war against error in manufacturing and service operations requires an understanding of “the enemy.” The types of error to be combated, the sources of these errors, and the amount of error that will be tolerated are important components of a functional definition (see Vol. I for an introduction).
The traditional view is that the amount of error to be accepted is defined by the specification limits of each characteristic of interest. Exceeding the specified tolerance of any characteristic immediately transforms the process output from “good” to “bad.” This is a very restrictive and misleading point of view; loss functions provide much greater insight into product performance and customer satisfaction.
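The best-known example, Taguchi’s quadratic loss function, can be sketched as follows; the target, tolerance, and cost figures are hypothetical:

```python
# Taguchi quadratic loss: L(y) = k * (y - m)^2, where m is the target and
# k is chosen so that loss at the specification limit equals the cost of
# a nonconforming unit. Figures are hypothetical.
target = 10.0          # nominal dimension, mm
spec_half_width = 0.5  # distance from target to either spec limit, mm
cost_at_limit = 40.0   # cost of a unit at the spec limit, $

k = cost_at_limit / spec_half_width ** 2   # 40 / 0.25 = 160

def loss(y):
    """Loss attributed to a unit measuring y."""
    return k * (y - target) ** 2

print(loss(10.0))   # 0.0  -- on target, no loss
print(loss(10.25))  # 10.0 -- inside spec, yet loss is nonzero
print(loss(10.5))   # 40.0 -- at the limit, loss equals cost_at_limit
```

The middle case makes the article’s point: a unit comfortably inside specification still carries loss, which the binary “good/bad” view hides entirely.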
Regardless of the decision-making model used, or how competent and conscientious a decision-maker is, making decisions involves risk. Some risks are associated with the individual or group making the decision. Others relate to the information used to make the decision. Still others are related to the way that this information is employed in the decision-making process.
Often, the realization of some risks increases the probability of realizing others; they are deeply intertwined. Fortunately, awareness of these risks and their interplay is often sufficient to mitigate them. To this end, several decision-making perils and predicaments are discussed below.
There is some disagreement among quality professionals about whether precontrol is a form of statistical process control (SPC). Like many tools prescribed by the Shainin System, precontrol’s statistical sophistication is disguised by its simplicity. The attitude of many seems to be that if it isn’t difficult or complex, it must not be rigorous.
Despite its simplicity, precontrol provides an effective means of process monitoring, with several advantages compared to control charting.
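The zone structure on which precontrol is built can be sketched as follows; the specification limits are hypothetical:

```python
# Classify a measurement into precontrol zones. The tolerance band is
# split into a green zone (middle half), yellow zones (outer quarters),
# and red (outside specification). Limits are hypothetical.
lsl, usl = 9.0, 11.0                            # specification limits
quarter = (usl - lsl) / 4.0
pc_low, pc_high = lsl + quarter, usl - quarter  # precontrol lines

def zone(y):
    if y < lsl or y > usl:
        return "red"       # outside specification
    if pc_low <= y <= pc_high:
        return "green"     # middle half of the tolerance
    return "yellow"        # outer quarters: caution

print([zone(y) for y in (10.0, 9.3, 8.8)])  # ['green', 'yellow', 'red']
```

Run/adjust/stop decisions are then made from the zones of small periodic samples rather than from plotted control limits, which is the source of precontrol’s simplicity.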
Lesser known than Six Sigma, but no less valuable, the Shainin System is a structured program for problem solving, variation reduction, and quality improvement. While there are similarities between these two systems, some key characteristics lie in stark contrast.
This installment of “The War on Error” introduces the Shainin System, providing background information and a description of its structure. Some common problem-solving tools will also be described. Finally, a discussion of the relationship between the Shainin System and Six Sigma will be presented, allowing readers to evaluate the potential for implementation of each in their organizations.
Despite the ubiquity of corporate Six Sigma programs and the intensity of their promotion, it is not uncommon for graduates to enter industry with little exposure to, and less understanding of, their administration or purpose. Universities that offer Six Sigma instruction often do so as a separate certificate, unintegrated with any degree program. Students are often unaware of the availability or the value of such a certificate.
Upon entering industry, the tutelage of an invested and effective mentor is far from guaranteed. This can curtail entry-level employees’ ability to contribute to company objectives, or even to understand the conversations taking place around them. Without a structured introduction, these employees may struggle to succeed in their new workplace, while responsibility for failure is misplaced.
This installment of “The War on Error” aims to provide an introduction sufficient to facilitate entry into a Six Sigma environment. May it also serve as a refresher for those seeking reentry after a career change or hiatus.
While Vol. IV focused on variable gauge performance, this installment of “The War on Error” presents the study of attribute gauges. Requiring the judgment of human appraisers adds a layer of nuance to attribute assessment. Although we refer to attribute gauges, assessment may be made exclusively by the human senses. Thus, analysis of attribute gauges may be less intuitive or straightforward than that of their variable counterparts.
Conducting attribute gauge studies is similar to variable gauge R&R studies. The key difference is in data collection – rather than a continuum of numeric values, attributes are evaluated with respect to a small number of discrete categories. Categorization can be as simple as pass/fail; it may also involve grading a feature relative to a “stepped” scale. The scale could contain several gradations of color, transparency, or other visual characteristic. It could also be graded according to subjective assessments of fit or other performance characteristic.
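One common way to summarize agreement between two appraisers making pass/fail calls is Cohen’s kappa, a statistic frequently used in attribute agreement analysis; the ratings below are hypothetical:

```python
# Agreement between two appraisers on pass/fail calls, summarized with
# Cohen's kappa. Ratings are hypothetical.
appraiser_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "fail"]
appraiser_2 = ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "pass"]

def cohens_kappa(a, b):
    """Observed agreement corrected for agreement expected by chance."""
    n = len(a)
    categories = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(appraiser_1, appraiser_2), 3))  # 0.5
```

Kappa near 1 indicates strong agreement; kappa near 0 indicates agreement no better than chance, a signal that the categories, scale, or appraiser training need attention.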
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC