In the context of Failure Modes and Effects Analysis (FMEA), “classical” refers to the techniques and formats that have been in use for many years, such as those presented in AIAG’s “FMEA Handbook” and other sources. Numerous variations of the document format are available for use. In this discussion, a recommended format is presented; one that facilitates a thorough, organized analysis.
Preparations for FMEA, discussed in Vol. II, are agnostic to the methodology and document format chosen; the inputs cited are applicable to any of them. In this installment of the “FMEA” series, how to conduct a “classical” Design FMEA (DFMEA) is presented by explaining each column of the recommended form. Populating the form columns in the proper sequence is only an approximation of analysis, but it is a very useful one for gaining experience with the methodology.
The recommended format used to describe the classical Design FMEA process is shown in Exhibit 1. There is no need to squint; it is only to provide an overview and a preview of the steps involved. The form has been divided into titled sections for purposes of presentation. These titles and divisions are not part of any industry standard; they are only used here to identify logical groupings of information contained in the FMEA and to aid learning.
Discussion of each section will be accompanied by a close-up image of the relevant portion of the form. Each column is identified by an encircled letter for easy reference to its description in the text. The top of the form contains general information about the analysis; this is where the presentation of the “classical” Design FMEA begins.
FMEA Form Header
1) Check the box that best describes the scope of the DFMEA – does it cover an entire system, a subsystem, or a single component?
2) On the lines below the checkbox, identify the system, subsystem, or component analyzed. Include relevant information, such as applicable model, program, product family, etc.
3) Design Responsible: Identify the lead engineer responsible for finalizing and releasing the design.
4) Key Date: Date of design freeze (design is “final”).
5) FMEA No.: Provide a unique identifier to be used for FMEA documentation (this form and all accompanying documents and addenda). It is recommended to use a coded system of identification to facilitate organization and retrieval of information. For example, FMEA No. 2022 – J – ITS – C – 6 could be interpreted as follows:
2022 – year of product launch,
J – product family,
ITS – product code,
C – Component-level FMEA,
6 – sixth component FMEA for the product.
This is an arbitrary example; organizations should develop their own meaningful identification systems.
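As a sketch of how such a coded identifier might be handled, the arbitrary example above could be composed and parsed as follows. The field names and helper functions are illustrative only, not part of any standard:

```python
# Illustrative sketch of the arbitrary FMEA numbering scheme described above.
# Field names, separators, and function names are assumptions, not a standard.

def make_fmea_no(year, family, product, level, seq):
    """Compose an identifier such as '2022-J-ITS-C-6'."""
    return f"{year}-{family}-{product}-{level}-{seq}"

def parse_fmea_no(fmea_no):
    """Split an identifier back into its coded fields."""
    year, family, product, level, seq = fmea_no.split("-")
    return {"year": int(year), "family": family, "product": product,
            "level": level, "sequence": int(seq)}

print(make_fmea_no(2022, "J", "ITS", "C", 6))    # 2022-J-ITS-C-6
print(parse_fmea_no("2022-J-ITS-C-6")["level"])  # C
```

A consistent, machine-parsable scheme like this makes sorting and retrieval of FMEA documentation straightforward, whatever coding an organization chooses.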
6) Page x of y: Number all pages to track the entire document and prevent loss of content.
7) FMEA Date: Date of first (“original”) FMEA release.
8) Rev.: Date of release of current revision.
FMEA Form Columns
A – Item/Function
List (a) Items for hardware approach or (b) Functions for functional approach to conducting a Design FMEA. A hardware approach can be useful during part-consolidation (i.e. design for manufacture or DFM) efforts; a functional approach facilitates taking a customer perspective. A hardware approach is more common for single-function or “simple” designs with low part count, while a functional approach may be better suited to more complex designs with high part counts or numerous functions. A hybrid approach, or a combination of hardware and functional approaches, is also possible.
B – Requirements
Describe (a) an Item’s contribution to the design (e.g. screw is required to “secure component to frame”) or (b) a Function’s parameters (e.g. control temperature requires “45 ± 5 °C”).
C – Potential Failure Modes
Describe (a) the manner in which the Item could fail to meet a requirement or (b) the manner in which the Function could fail to be properly executed. An Item/Function may have multiple Failure Modes; each undesired outcome is defined in technical terms. Opposite, or “mirror-image,” conditions (e.g. high/low, long/short, left/right) should always be considered. Conditional failures should also be included – demand failures (when activated), operational failures (when in use), and standby failures (when not in use) may have special causes that require particular attention.
D – Potential Effects of Failure
Describe the undesired outcome(s) from the customer perspective. Effects of Failure may include physical damage, reduced performance, intermittent function, unsatisfactory aesthetics, or other deviation from reasonable customer expectations. All “customers” must be considered, from internal to end user; packaging, shipping, installation and service crews, and others could be affected.
E – Severity (S)
Rank each Effect of Failure on a predefined scale. Loosely defined, the scale ranges from 1 – insignificant to 10 – resulting in serious bodily injury or death. The suggested Severity evaluation criteria and ranking scale from AIAG’s “FMEA Handbook” are shown in Exhibit 7. To evaluate nonautomotive designs, the criteria descriptions can be modified to reflect the characteristics of the product, industry, and application under consideration; an example is shown in Exhibit 8.
F – Classification
Identify high-priority Failure Modes and Causes of Failure – that is, those that require the most rigorous monitoring or strictest controls. These may be defined by customer requirements, empirical data, the lack of technology currently available to improve design performance, or other relevant characteristics. Special characteristics are typically identified by a symbol or abbreviation that may vary from one company to another. Examples of special characteristic identifiers are shown in Exhibit 9.
G – Potential Causes of Failure
Identify the potential causes of the Failure Mode. Like Failure Modes, Causes of Failure must be defined in technical terms, rather than customer perceptions (i.e. state the cause of the Failure Mode, not the Effect). A Failure Mode may have multiple potential Causes; identify and evaluate each of them individually.
H – Prevention Controls
Identify design controls used to prevent the occurrence of each Failure Mode or undesired outcome. Design standards (e.g. ASME Pressure Vessel Code), simulation, designed experiments, and error-proofing (see “The War on Error – Vol. II”) are examples of Design Prevention Controls.
I – Occurrence (O)
Rank each Failure Mode according to its frequency of occurrence, a measure of the effectiveness of the Design Prevention Controls employed. Occurrence rating tables typically present multiple methods of evaluation, such as qualitative descriptions of frequency and quantitative probabilities. The example in Exhibit 11 is from the automotive industry, while the one in Exhibit 12 is generalized for use in any industry. Note that the scales are significantly different; once a scale is chosen, or developed, that elicits sufficient and appropriate responses to rankings, it must be used consistently. That is, rankings contained in each DFMEA must have the same meaning.
The potential Effects of Failure are evaluated individually via the Severity ranking, but collectively in the Occurrence ranking. This may seem inconsistent, or counterintuitive, at first; however, which of the potential Effects will be produced by a failure cannot be reliably predicted. Therefore, Occurrence of the Failure Mode must be ranked. For a single Failure Mode, capable of producing multiple Effects, ranking Occurrence of each Effect would understate the significance of the underlying Failure Mode and its Causes.
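To illustrate the distinction, with invented data and a hypothetical structure (not from any standard form), Severity is ranked once per Effect, while Occurrence is ranked once for the Failure Mode as a whole:

```python
# Hypothetical data model: one Occurrence (O) ranking per Failure Mode,
# one Severity (S) ranking per Effect. All values are invented examples.

failure_mode = {
    "description": "Fastener loosens under vibration",
    "occurrence": 4,                      # a single O ranking for the mode
    "effects": [                          # an S ranking for each effect
        {"effect": "Rattle noise", "severity": 4},
        {"effect": "Component detaches", "severity": 8},
    ],
}

# The most severe potential Effect indicates the worst credible outcome
# of the Failure Mode.
worst_s = max(e["severity"] for e in failure_mode["effects"])
print(worst_s, failure_mode["occurrence"])  # 8 4
```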
J – Detection Controls – Failure Mode
Identify design controls used to detect each Failure Mode. Examples include design reviews, simulation, mock-up and prototype testing.
K – Detection Controls – Cause
Identify design controls used to detect each Cause of Failure. Examples include validation and reliability testing and accelerated aging.
L – Detection (D)
Rank the effectiveness of all current Design Detection Controls for each Failure Mode. This includes Detection Controls for both the Failure Mode and potential Causes of Failure. The Detection ranking used in the RPN calculation is the lowest (i.e. most effective) ranking among the controls for the Failure Mode. Including the D ranking in the description of each Detection Control makes identification of the most effective control a simple matter. Detection ranking table examples are shown in Exhibit 13 and Exhibit 14. Again, differences can be significant; select or develop an appropriate scale and apply it consistently.
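A minimal sketch of the selection rule, with invented controls and rankings: the lowest D among all current Detection Controls is carried into the RPN, because on a Detection scale a low ranking indicates effective detection.

```python
# Invented example: Detection Controls for one Failure Mode, listed with
# their D rankings. Control names and values are illustrative only.

detection_controls = [
    ("Design review", 6),             # Failure Mode control
    ("Prototype vibration test", 3),  # Failure Mode control
    ("Accelerated aging", 4),         # Cause of Failure control
]

# The lowest D ranking identifies the most effective control;
# it is the ranking used in the RPN calculation.
d_ranking = min(d for _, d in detection_controls)
print(d_ranking)  # 3
```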
M – Risk Priority Number (RPN)
The Risk Priority Number is the product of the Severity, Occurrence, and Detection rankings: RPN = S x O x D. The RPN column provides a snapshot summary of the overall risk associated with the design. On its own, however, it does not provide the most effective means to prioritize improvement activities. Typically, the S, O, and D rankings are used, in that order, to prioritize activities. For example, all high Severity rankings (e.g. S ≥ 7) require review and improvement. If an appropriate redesign cannot be developed or justified, management approval is required to accept the current design.
Similarly, all high Occurrence rankings (e.g. O ≥ 7) require additional controls to reduce the frequency of failure. Those that cannot be improved require approval of management to allow the design to enter production. Finally, controls with high Detection rankings (e.g. D ≥ 6) require improvement or justification and management approval.
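The prioritization described above can be sketched in code. The threshold values are the examples cited in the text (S ≥ 7, O ≥ 7, D ≥ 6), not universal rules, and the function names are illustrative:

```python
# Illustrative sketch of RPN calculation and threshold-based review flags.
# Thresholds are the example values from the text, not universal rules.

def rpn(s, o, d):
    """Risk Priority Number: RPN = S x O x D."""
    return s * o * d

def review_flags(s, o, d):
    """List rankings that require review, improvement, or management approval."""
    flags = []
    if s >= 7:
        flags.append("high Severity: redesign or management approval required")
    if o >= 7:
        flags.append("high Occurrence: additional prevention controls required")
    if d >= 6:
        flags.append("high Detection: improve detection controls or justify")
    return flags

print(rpn(8, 3, 4))           # 96
print(review_flags(8, 3, 6))  # flags Severity and Detection
```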
Continuous improvement efforts are prioritized by using a combination of RPN and its component rankings, S, O, and D. Expertise and judgment must be applied to develop Recommended Actions and to determine which ones provide the greatest potential for risk reduction.
Requiring review or improvement according to threshold values is an imperfect practice. It incentivizes less-conscientious evaluators to manipulate rankings, to shirk responsibility for improvement activities and, ultimately, product performance. This is particularly true of RPN, as small changes in component rankings that are relatively easy to justify result in large changes in RPN. For this reason, review based on threshold values is only one improvement step and RPN is used for summary and comparison purposes only.
N – Recommended Actions
Describe improvements to be made to the design or controls to reduce risk. Several options can be included; implementation will be prioritized according to evaluations of risk reduction. Lowering Severity is the highest priority, followed by reducing frequency of Occurrence, and improving Detection of occurrences.
O – Responsibility
Identify the individual who will be responsible for executing the Recommended Action, providing status reports, etc. More than one individual can be named, but this should be rare; the first individual named is the leader of the effort and is ultimately responsible for execution. Responsibility should never be assigned to a department, ad hoc team, or other amorphous group.
P – Target Completion Date
Assign a due date for Recommended Actions to be completed. Recording dates in a separate column facilitates sorting for purposes of status updates, etc.
Q – Actions Taken
Describe the Actions Taken to improve the design or controls and lower the design’s inherent risk. These may differ from the Recommended Actions; initial ideas are not fully developed and may require adjustment and adaptation to successfully implement. Due to limited space in the form, entering a reference to an external document that details the Actions Taken is acceptable. The document should be identified with the FMEA No. and maintained as an addendum to the DFMEA.
R – Effective Date
Document the date that the Actions Taken were complete, or fully integrated into the design. Recording dates in a separate column facilitates FMEA maintenance, discussed in the next section.
S – Severity (S) – Predicted
Rank each Effect of Failure as it is predicted to affect the customer after the Recommended Actions are fully implemented. Severity rankings rarely change; it is more common for a design change to eliminate a Failure Mode or Effect of Failure. In such a case, the DFMEA is revised, excluding the Failure Mode or Effect from further analysis. Alternatively, the entry is retained, with S lowered to 1, to maintain a historical record of development.
T – Occurrence (O) – Predicted
Estimate the frequency of each Failure Mode’s Occurrence after Recommended Actions are fully implemented. Depending on the nature of the Action Taken, O may or may not change.
U – Detection (D) – Predicted
Rank the predicted effectiveness of all Design Detection Controls after Recommended Actions are fully implemented. Depending on the nature of the Action Taken, D may or may not change.
* Predictions of S, O, and D must be derived from the same scales as the original rankings.
V – Risk Priority Number (RPN) – Predicted
Calculate the predicted RPN after Recommended Actions are fully implemented. This number can be used to evaluate the relative effectiveness of Recommended Actions.
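As an illustration with invented rankings, predicted RPNs can be compared across candidate Recommended Actions to gauge their relative effectiveness:

```python
# Invented example: comparing predicted RPNs of candidate Recommended
# Actions against the current design's RPN. All rankings are illustrative.

def rpn(r):
    """RPN = S x O x D for a dict of rankings."""
    return r["S"] * r["O"] * r["D"]

current = {"S": 8, "O": 5, "D": 6}  # current S, O, D rankings
actions = {
    "Add redundant sensor": {"S": 8, "O": 2, "D": 6},
    "Improve bench test":   {"S": 8, "O": 5, "D": 3},
}

baseline = rpn(current)
for name, predicted in actions.items():
    print(f"{name}: RPN {baseline} -> {rpn(predicted)}")
```

Here the first action is predicted to lower RPN from 240 to 96 and the second to 120; as the text cautions, such comparisons inform judgment rather than replace it.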
Review and Maintenance of the DFMEA
Review of the DFMEA is required throughout the product development process. As Recommended Actions are implemented, the results must be evaluated and compared to predictions and expectations.
Performance of each Item or Function affected by Actions Taken must be reevaluated and new S, O, and D rankings assigned as needed. Simply assuming that the predicted values have been achieved is not sufficient; the effectiveness of the Actions Taken must be confirmed.
Once the effectiveness of Actions Taken has been evaluated, information from the Action Results section can be transferred to the body of the DFMEA. Descriptions in the relevant columns are updated to reflect design modifications, and new rankings replace the old. New Recommended Actions can also be added to further improve the design’s risk profile. The next round of improvements is prioritized according to the updated rankings and Recommended Actions.
Maintenance of the DFMEA is the practice of systematically reviewing, updating, and reprioritizing activities as described above. The DFMEA will undergo many rapid maintenance cycles during product development. Once the design is approved for production, DFMEA maintenance activity tends to decline sharply; it should not, however, cease, as long as the product remains viable. Periodic reviews should take place throughout the product lifecycle to evaluate the potential of new technologies, incorporate process and field data collected, and apply any knowledge gained since the product’s introduction. Any design change must also initiate a review of the DFMEA sections pertaining to Items or Functions affected by the revision.
The DFMEA and all related documentation – diagrams and other addenda – remain valid and, therefore, require maintenance for the life of the product. The DFMEA serves as input to the Process FMEA, other subsequent analyses, Quality documentation, and even marketing materials. It is an important component in the documentation chain of a successful product. The investment made in a thoughtful analysis is returned manyfold during the lifecycles of the product, its derivatives, and any similar or related products.
For additional guidance or assistance with Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
For a directory of “FMEA” volumes on “The Third Degree,” see Vol. I: Introduction to Failure Modes and Effects Analysis.
[Link] “Potential Failure Mode and Effects Analysis,” 4th ed. Automotive Industry Action Group, 2008.
[Link] Product Design. Kevin Otto and Kristin Wood. Prentice Hall, 2001.
[Link] Creating Quality. William J. Kolarik. McGraw-Hill, Inc., 1995.
[Link] The Six Sigma Memory Jogger II. Michael Brassard, Lynda Finn, Dana Ginn, and Diane Ritter. GOAL/QPC, 2002.
[Link] “FMEA Handbook Version 4.2.” Ford Motor Company, 2011.
Jody W. Phelps, MSc, PMP®, MBA
JayWink Solutions, LLC
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC