Preparations for Process Failure Modes and Effects Analysis (Process FMEA) (see Vol. II) occur, in large part, while the Design FMEA undergoes revision to develop and assign Recommended Actions. An earlier start, while ostensibly desirable, may result in duplicated effort. As a design evolves, the processes required to support it also evolve; allowing a design to reach a sufficient level of maturity to minimize process redesign is an efficient approach to FMEA.
In this installment of the “FMEA” series, conducting a “classical” Process FMEA (PFMEA) is presented as a close parallel to the DFMEA process (Vol. III). Each installment is prepared as a standalone reference for those engaged in either activity, but reading both is recommended to maintain awareness of the interrelationship of the analyses.
The recommended format used to describe the classical PFMEA process is shown in Exhibit 1. It is nearly identical to the DFMEA form; only minor terminology changes differentiate the two. This is to facilitate sharing of information between analysis teams and streamline training efforts for both. The form is shown in its entirety only to provide an overview and a preview of the analysis steps involved. The form has been divided into titled sections for purposes of presentation. These titles and divisions are not part of any industry standard; they are only used here to identify logical groupings of information contained in the FMEA and to aid learning.
Discussion of each section will be accompanied by a close-up image of the relevant portion of the form. Each column is identified by an encircled letter for easy reference to its description in the text. The top of the form contains general information about the analysis; this is where the presentation of the “classical” Process FMEA begins.
FMEA Form Header
1) Check the box that best describes the scope of the PFMEA – does it cover an entire system, a subsystem, or a single component? For example, analysis may be conducted on a Laundry system, a Wash subsystem, or a Spin Cycle component.
2) On the lines below the checkbox, identify the process system, subsystem, or component analyzed. Include information that is unique to the process under scrutiny, including the specific technology used (e.g. resistance welding), product family, etc.
3) Process Responsibility: Identify the lead engineer responsible for developing the process, and assuring and maintaining its performance.
4) Key Date: Date of design freeze (testing complete, documented process approved by customer, etc.).
5) FMEA No.: Provide a unique identifier to be used for FMEA documentation (this form and all accompanying documents and addenda). It is recommended to use a coded system of identification to facilitate organization and retrieval of information. For example, FMEA No. 2022 – W – RES – C – 6 could be interpreted as follows:
2022 – year of process launch,
W – process family (e.g. welding),
RES – process code (e.g. resistance),
C – Component-level FMEA,
6 – sixth component FMEA for the process “system.”
This is an arbitrary example; organizations should develop their own meaningful identification systems.
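A coded identifier like the arbitrary example above can be split into its fields programmatically. The sketch below assumes the example scheme described here; the field names and codes are illustrative, not part of any standard.

```python
# Hypothetical parser for the example FMEA numbering scheme above.
# Field names and codes are illustrative; organizations define their own.

def parse_fmea_no(fmea_no: str) -> dict:
    """Split a coded FMEA No. of the form 'YYYY - F - PPP - L - N' into fields."""
    year, family, process, level, seq = [p.strip() for p in fmea_no.split("-")]
    return {
        "launch_year": int(year),   # year of process launch
        "process_family": family,   # e.g. 'W' = welding
        "process_code": process,    # e.g. 'RES' = resistance
        "fmea_level": level,        # scope: system, subsystem, or component
        "sequence": int(seq),       # nth FMEA at that level
    }

record = parse_fmea_no("2022 - W - RES - C - 6")
print(record)
```

A consistent, parseable format like this also simplifies sorting and retrieval of FMEA documents and their addenda.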
6) Page x of y: Number the pages to keep the entire document intact and prevent loss of content.
7) FMEA Date: Date of first (“original”) FMEA release.
8) Rev.: Date of release of current revision.
FMEA Form Columns
A – Process Step/Function
List Process Steps by name or by description of purpose (Function). Information recorded on a Process Flow Diagram (PFD) can be input directly in this column. This list may consist of a combination of Process Step names and functional descriptions. For example, a process name that is in wide use, but the purpose of which is not made obvious by the name, should be accompanied by a functional description. This way, those who are not familiar with the process name are not put at a disadvantage when reviewing the FMEA.
B – Requirements
Describe what a Process Step is intended to achieve, or the parameters of the Function (e.g. boil water requires “heat water to 100 °C”).
C – Potential Failure Modes
Describe how a Requirement could fail to be met. A Process Step/Function may have multiple Failure Modes; each undesired outcome is defined in technical terms. Opposite, or “mirror-image,” conditions (e.g. high/low, long/short, left/right) should always be considered. Conditional process failures should also be included, where applicable, though demand failures (when activated) and standby failures (when not in use) are more likely to delay processing than to affect a process’s performance. Operational failures (when in use) require the greatest effort in analysis.
D – Potential Effects of Failure
Describe the undesired outcome(s) from the customer perspective. Effects of Failure may include physical damage, reduced performance, intermittent function, unsatisfactory aesthetics, or other deviation from reasonable customer expectations. All “customers” must be considered, from internal to end user; packaging, shipping, installation and service crews, and others could be affected.
E – Severity (S)
Rank each Effect of Failure on a predefined scale. Loosely defined, the scale ranges from 1 – insignificant to 10 – resulting in serious bodily injury or death. The suggested Severity evaluation criteria and ranking scale from AIAG’s “FMEA Handbook,” shown in Exhibit 7, provide guidelines for evaluating the impact of failures on both internal and external customers. To evaluate nonautomotive designs, the criteria descriptions can be modified to reflect the characteristics of the product, industry, and application under consideration; an example is shown in Exhibit 8.
F – Classification
Identify high-priority Failure Modes and Causes of Failure – that is, those that require the most rigorous monitoring or strictest controls. These may be defined by customer requirements, empirical data, the lack of technology currently available to improve process performance, or other relevant characteristic. Special characteristics are typically identified by a symbol or abbreviation that may vary from one company to another. Examples of special characteristic identifiers are shown in Exhibit 9. Special characteristics identified on a DFMEA should be carried over to any associated PFMEAs to ensure sufficient controls are implemented.
G – Potential Causes of Failure
Identify the potential causes of the Failure Mode. Like Failure Modes, Causes of Failure must be defined in technical terms, rather than customer perceptions (i.e. state the cause of the Failure Mode, not the Effect). A Failure Mode may have multiple potential Causes; identify and evaluate each of them individually.
Incoming material is assumed to be correct (within specifications); therefore, it is not to be listed as a Cause of Failure. Incorrect material inputs are evidence of failures of prior processes (production, packaging, measurement, etc.), but their effect on the process should not be included in the PFMEA. Also, descriptions must be specific. For example, the phrase “operator error” should never be used, but “incorrect sequence followed” provides information that is useful in improving the process.
H – Prevention Controls
Identify process controls used to prevent the occurrence of each Failure Mode or undesired outcome. Process guidelines from material or equipment suppliers, simulation, designed experiments, statistical process control (SPC), and error-proofing (see “The War on Error – Vol. II”) are examples of Process Prevention Controls.
I – Occurrence (O)
Rank each Failure Mode according to its frequency of occurrence, a measure of the effectiveness of the Process Prevention Controls employed. Occurrence rating tables typically present multiple methods of evaluation, such as qualitative descriptions of frequency and quantitative probabilities. The example in Exhibit 11 is from the automotive industry, while the one in Exhibit 12 is generalized for use in any industry. Note that the scales are significantly different; once a scale is chosen, or developed, that elicits sufficient and appropriate responses to rankings, it must be used consistently. That is, rankings contained in each PFMEA must have the same meaning.
The potential Effects of Failure are evaluated individually via the Severity ranking, but collectively in the Occurrence ranking. This may seem inconsistent, or counterintuitive, at first; however, which of the potential Effects will be produced by a failure cannot be reliably predicted. Therefore, Occurrence of the Failure Mode must be ranked. For a single Failure Mode, capable of producing multiple Effects, ranking Occurrence of each Effect would understate the significance of the underlying Failure Mode and its Causes.
J – Detection Controls – Failure Mode
Identify process controls used to detect each Failure Mode. Examples include various sensors used to monitor process parameters, measurements, post-processing verifications, and error-proofing in subsequent operations.
K – Detection Controls – Cause
Identify process controls used to detect each Cause of Failure. Similar tools and methods are used to detect Causes and Failure Modes.
L – Detection (D)
Rank the effectiveness of all current Process Detection Controls for each Failure Mode. This includes Detection Controls for both the Failure Mode and potential Causes of Failure. The Detection ranking used for RPN calculation is the lowest for the Failure Mode. Including the D ranking in the description for each Detection Control makes identification of the most effective control a simple matter. Detection ranking table examples are shown in Exhibit 13 and Exhibit 14. Again, differences can be significant; select or develop an appropriate scale and apply it consistently.
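Selecting the Detection ranking for the RPN calculation, as described above, amounts to taking the lowest D among all current controls for the Failure Mode. A minimal sketch, with illustrative control descriptions and rankings:

```python
# Sketch: choose the Detection ranking used for RPN from all current controls
# for one Failure Mode. Lower D = more effective control, so the minimum is used.
# Control descriptions and rankings below are illustrative only.

detection_controls = [
    ("in-line weld-current monitor", 3),        # (description, D ranking)
    ("post-process visual inspection", 6),
    ("error-proofed fixture in next operation", 4),
]

d_for_rpn = min(d for _, d in detection_controls)
print(d_for_rpn)  # ranking of the most effective control
```

Storing the D ranking alongside each control's description, as in the list above, makes the most effective control easy to identify at a glance.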
M – Risk Priority Number (RPN)
The Risk Priority Number is the product of the Severity, Occurrence, and Detection rankings: RPN = S x O x D. The RPN column provides a snapshot summary of the overall risk associated with the process. On its own, however, it does not provide the most effective means to prioritize improvement activities. Typically, the S, O, and D rankings are used, in that order, to prioritize activities. For example, all high Severity rankings (e.g. S ≥ 7) require review and improvement. If a satisfactory process redesign cannot be developed or justified, management approval is required to accept the current process.
Similarly, all high Occurrence rankings (e.g. O ≥ 7) require additional controls to reduce the frequency of failure. Those that cannot be improved require approval of management to allow the process to operate in its existing configuration. Finally, controls with high Detection rankings (e.g. D ≥ 6) require improvement or justification and management approval.
Continuous improvement efforts are prioritized by using a combination of RPN and its component rankings, S, O, and D. Expertise and judgment must be applied to develop Recommended Actions and to determine which ones provide the greatest potential for risk reduction.
Requiring review or improvement according to threshold values is an imperfect practice. It incentivizes less-conscientious evaluators to manipulate rankings, to shirk responsibility for improvement activities, process performance and, ultimately, product quality. This is particularly true of RPN, as small changes in component rankings that are relatively easy to justify result in large changes in RPN. For this reason, review based on threshold values is only one improvement step and RPN is used for summary and comparison purposes only.
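The RPN calculation and threshold-based review described above can be sketched as follows. The threshold values (S ≥ 7, O ≥ 7, D ≥ 6) are the examples used in this discussion, not standard values; each organization sets its own.

```python
# Sketch of RPN = S x O x D and the example review thresholds discussed above.
# Thresholds are illustrative; organizations define their own criteria.

def rpn(s: int, o: int, d: int) -> int:
    """Risk Priority Number: product of Severity, Occurrence, Detection."""
    return s * o * d

def review_flags(s: int, o: int, d: int) -> list:
    """Return the review actions triggered by the example thresholds."""
    flags = []
    if s >= 7:
        flags.append("high Severity: redesign or obtain management approval")
    if o >= 7:
        flags.append("high Occurrence: add controls or obtain approval")
    if d >= 6:
        flags.append("high Detection: improve controls or justify")
    return flags

print(rpn(8, 3, 5))           # 120
print(review_flags(8, 3, 5))  # Severity flag only
```

Note how easily a small change in one ranking swings the RPN (e.g. dropping O from 3 to 2 cuts this RPN from 120 to 80), which is why RPN is best reserved for summary and comparison purposes.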
N – Recommended Actions
Describe improvements to be made to the process or controls to reduce risk. Several options can be included; implementation will be prioritized according to evaluations of risk reduction. Lowering Severity is the highest priority, followed by reducing frequency of Occurrence, and improving Detection of occurrences.
O – Responsibility
Identify the individual who will be responsible for executing the Recommended Action, providing status reports, etc. More than one individual can be named, but this should be rare; the first individual named is the leader of the effort and is ultimately responsible for execution. Responsibility should never be assigned to a department, ad hoc team, or other amorphous group.
P – Target Completion Date
Assign a due date for Recommended Actions to be completed. Recording dates in a separate column facilitates sorting for purposes of status updates, etc.
Q – Actions Taken
Describe the Actions Taken to improve the process or controls and lower the inherent risk. These may differ from the Recommended Actions; initial ideas are not fully developed and may require adjustment and adaptation to successfully implement. Due to limited space in the form, entering a reference to an external document that details the Actions Taken is acceptable. The document should be identified with the FMEA No. and maintained as an addendum to the PFMEA.
R – Effective Date
Document the date that the Actions Taken were complete, or fully integrated into the process. Recording dates in a separate column facilitates FMEA maintenance, discussed in the next section.
S – Severity (S) – Predicted
Rank each Effect of Failure as it is predicted to affect internal or external customers after the Recommended Actions are fully implemented. Severity rankings rarely change; it is more common for a process change to eliminate a Failure Mode or Effect of Failure. In such cases, the PFMEA is revised, excluding the Failure Mode or Effect from further analysis. Alternatively, the entry is retained, with S lowered to 1, to maintain a historical record of process development.
T – Occurrence (O) – Predicted
Estimate the frequency of each Failure Mode’s Occurrence after Recommended Actions are fully implemented. Depending on the nature of the Action Taken, O may or may not change.
U – Detection (D) – Predicted
Rank the predicted effectiveness of all Process Detection Controls after Recommended Actions are fully implemented. Depending on the nature of the Action Taken, D may or may not change.
* Predictions of S, O, and D must be derived from the same scales as the original rankings.
V – Risk Priority Number (RPN) – Predicted
Calculate the predicted RPN after Recommended Actions are fully implemented. This number can be used to evaluate the relative effectiveness of Recommended Actions.
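Predicted RPNs can be used to rank candidate Recommended Actions before committing resources. A minimal sketch, with hypothetical actions and predicted rankings:

```python
# Illustrative comparison of candidate Recommended Actions by predicted RPN.
# Actions and predicted S, O, D values below are hypothetical examples.

candidates = [
    ("add error-proofed fixture", 7, 2, 3),     # (action, predicted S, O, D)
    ("add post-process inspection", 7, 4, 2),
]

# Sort ascending by predicted RPN = S x O x D; lowest predicted risk first.
ranked = sorted(candidates, key=lambda a: a[1] * a[2] * a[3])
print(ranked[0][0])  # candidate with the lowest predicted RPN
```

As noted above, predicted rankings must come from the same scales as the originals, or the comparison is meaningless.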
Review and Maintenance of the PFMEA
Review of the PFMEA is required throughout the lifespan of the process. As Recommended Actions are implemented, the results must be evaluated and compared to predictions and expectations.
Performance of each Process Step or Function affected by Actions Taken must be reevaluated and new S, O, and D rankings assigned as needed. Simply assuming that the predicted values have been achieved is not sufficient; the effectiveness of the Actions Taken must be confirmed.
Once the effectiveness of Actions Taken has been evaluated, information from the Action Results section can be transferred to the body of the PFMEA. Descriptions in the relevant columns are updated to reflect process development activity; new rankings replace the old. New Recommended Actions can also be added to further improve the process’s risk profile. The next round of improvements is prioritized according to the updated rankings and Recommended Actions.
Maintenance of the PFMEA is the practice of systematically reviewing, updating, and reprioritizing activities as described above. The PFMEA will undergo many rapid maintenance cycles prior to product launch. Once the process is approved for continuous operation, PFMEA maintenance activity tends to decline sharply; it should not, however, cease, as long as the process remains in operation. Periodic reviews should be used to evaluate the potential of new technologies, incorporate process and field data collected, and apply any knowledge gained since the process was approved. Any process change must also initiate a review of the PFMEA sections pertaining to Process Steps or Functions affected by the revision.
The PFMEA and all related documentation – diagrams and other addenda – remain valid and, therefore, require maintenance for the life of the process. This documentation serves as input to subsequent analyses and Quality documentation. The PFMEA is an important component in the documentation chain of a successful product. The investment made in a thoughtful analysis is returned manyfold during the lifecycle of the product, its derivatives, and any similar or related products.
For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
For a directory of “FMEA” volumes on “The Third Degree,” see Vol. I: Introduction to Failure Modes and Effects Analysis.
[Link] “Potential Failure Mode and Effects Analysis,” 4ed. Automotive Industry Action Group, 2008.
[Link] Creating Quality. William J. Kolarik; McGraw-Hill, Inc., 1995.
[Link] The Six Sigma Memory Jogger II. Michael Brassard, Lynda Finn, Dana Ginn, Diane Ritter; GOAL/QPC, 2002.
[Link] “FMEA Handbook Version 4.2.” Ford Motor Company, 2011.
Jody W. Phelps, MSc, PMP®, MBA
JayWink Solutions, LLC
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC