A Pugh Matrix is a visual aid created during a decision-making process. It presents, in summary form, a comparison of alternatives with respect to critical evaluation criteria. As is true of other decision-making tools, a Pugh Matrix will not “make the decision for you.” It will, however, facilitate rapidly narrowing the field of alternatives and focusing attention on the most viable candidates. A useful way to conceptualize the Pugh Matrix Method is as an intermediate-level tool, positioned between the structured, but open, Rational Model (Vol. II) and the thorough Analytic Hierarchy Process (AHP, Vol. III). The Pugh Matrix is more conclusive than the former and less complex than the latter.

Typical practice of “The Third Degree” is to present a tool or topic as a composite, building upon the most useful elements of various sources and variants. The Pugh Matrix Method is a profound example of this practice; even the title is a composite! Information about this tool can be found in the literature under various headings, including Pugh Matrix, Decision Matrix, Pugh Method, Pugh Controlled Convergence Method, and Concept Selection. This technique was developed in the 1980s by Stuart Pugh, Professor of Design at Strathclyde University in Glasgow, Scotland. Although originally conceived as a design concept selection technique, the tool is applicable to a much broader range of decision contexts. It is, therefore, presented here in more generic terms than in Pugh’s original construction.

The Pugh Matrix Method

A schematic diagram of the Pugh Matrix is presented in Exhibit 1. To facilitate learning to use the matrix, column and row numbers are referenced [i.e. col(s). yy, row(s) xx] when specific areas are introduced or discussed. Column and row numbers may differ in practice, as the matrix is expanded or modified to suit a specific decision. Instructions and guidelines for completing the Pugh Matrix, based on the schematic of Exhibit 1, are given below.
The first step of any decision-making process is to clearly define the decision to be made. A description of the situation faced – problem, opportunity, or objective – clarifies the purpose of the exercise for all involved. This description should include as much detailed information as is necessary for participants to effectively evaluate alternatives. A concise statement that encapsulates this information should also be crafted to be used as a title for summary documentation. Enter this title in the Decision box above the matrix. Below the matrix, in the Notes boxes, collect information as the matrix is constructed that is needed to “follow” the analysis and understand the decision rationale. Space is limited to encourage the use of shorthand notes only; some typical contents are suggested in the schematic (Exhibit 1). Detailed discussions and explanations, including the long-form decision description, should be documented separately. The Pugh Matrix is typically a component of an analysis report where these details are also recorded. Detailed information on the alternatives evaluated should also be included.

A well-defined decision informs the development of a list of relevant evaluation criteria or specifications. Each criterion or specification should be framed such that a higher rating is preferred to a lower one. Ambiguous phrasing can lead to misguided conclusions. For example, evaluators may interpret the criterion “cost” differently. Some may rate higher cost with a higher score, though the reverse is intended. A criterion of “low cost” or “cost effectiveness,” while referring to the same attribute and data, may communicate the intent more clearly. Compile the list of criteria in the Evaluation section of the matrix (col. 1, rows 4 - 8). Next, the criteria are weighted, or prioritized, in the Criteria Weight column (col. 2, rows 4 - 8).
A variety of weighting scales can be used; thus it is critical that the scale in use be clearly defined prior to beginning the process. A 1 – 10 scale offers simplicity and familiarity and is, therefore, an attractive option. An exemplar 1 – 10 scale is provided in Exhibit 2. The universal scale factor descriptions may be employed as presented or modified to suit a particular decision environment or organization. Specify the scale in use in the Criteria Weight box (col. 2, row 2).

With the relevant criteria in mind, it is time to brainstorm alternatives. Before entering the alternatives in the matrix, the list should be screened as discussed in Project Selection – Process, Criteria, and Other Factors. The matrix is simplified when alternatives that can quickly be determined to be infeasible are eliminated from consideration. Enter the reduced list in the Alternatives section of the matrix (cols. 3 - 6, row 3). The first alternative listed (col. 3, row 3) is the baseline or “datum”, typically chosen to satisfy one of the following conditions:
The next step in the process is to select an evaluation scale; 3-, 5-, and 7-point scales are common, though others could be used. The scale is centered at zero, with an equal number of possible scores above and below. The 3-point scale will have one possible score above (+1) and one possible score below (-1) zero. The 5-point scale will have two possible scores on each side of zero (+2, +1, 0, -1, -2), and so on. Larger scales can also be constructed; however, the smallest scale that provides sufficient discrimination between alternatives is recommended. A positive score indicates that an alternative is preferred to the baseline; a negative score indicates that it is less preferred, or disfavored, with respect to that criterion. Larger numbers (when available in the scale selected) indicate the magnitude of the preference. Specify the scale in use in the Evaluation Scale box (col. 2, row 2). Complete the Evaluation section of the matrix by rating each alternative. Conduct pairwise comparisons of each alternative and the baseline for each criterion listed, entering each score in the corresponding cell (cols. 4 - 6, rows 4 - 8). By definition, the baseline alternative scores a zero for each criterion (col. 3, rows 4 - 8).

Once all of the alternatives have been scored on all criteria, the Summary section of the matrix can be completed. The expanded Summary section presented in Exhibit 1 allows detailed analysis and comparisons of performance while remaining easy to complete, requiring only simple arithmetic. To populate the upper subsection of the Summary, tally the number of criteria for which each alternative is equivalent to (score = 0), preferred to (score > 0), and less preferred than (score < 0) the baseline. Enter these numbers in the # Criteria = 0 (cols. 4 - 6, row 9), # Criteria > 0 (cols. 4 - 6, row 10), and # Criteria < 0 (cols. 4 - 6, row 11) cells, respectively.
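The tallying step is simple enough to automate. The sketch below assumes evaluation scores are stored per alternative as a mapping from criterion name to pairwise score; the criterion names and numbers are illustrative, not taken from the article's exhibits.

```python
def tally(evals):
    """Evaluation Tallies for one alternative: the number of criteria on
    which it is equivalent to (score = 0), preferred to (score > 0), and
    less preferred than (score < 0) the baseline."""
    scores = list(evals.values())
    return (sum(1 for s in scores if s == 0),   # Criteria = 0
            sum(1 for s in scores if s > 0),    # Criteria > 0
            sum(1 for s in scores if s < 0))    # Criteria < 0

# Hypothetical 5-point-scale scores for one alternative vs. the baseline.
example = {"Cost effectiveness": -2, "Productivity": +2, "Service life": +1}
print(tally(example))  # (0, 2, 1)
```

By definition, the baseline tallies to (number of criteria, 0, 0), since it scores zero on every criterion.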
The baseline’s tallies will equal the number of criteria, zero, and zero, respectively. The lower subsection of the Summary displays the weighted positive (preferred) and negative (disfavored) scores for each alternative. For each criterion for which an alternative is preferred, multiply its criteria weight by the evaluation score; sum the products in the Weighted > 0 cell for each alternative (cols. 4 - 6, row 12). Likewise, sum the products of the weights and scores of all criteria for which an alternative is disfavored and enter the negative sum in the Weighted < 0 cell for that alternative (cols. 4 - 6, row 13). Again, the baseline receives scores of zero (col. 3, rows 12 - 13). The final portion of the matrix to be populated is the Rank section, which includes the Total Score (cols. 3 - 6, row 14) and Gross Rank (cols. 3 - 6, row 15) of each alternative. The Total Score is calculated by summing the products of weights and scores for all criteria or simply summing the positive and negative weighted scores. Again, the definition of baseline scoring requires it to receive a zero score. Other alternatives may earn positive, negative, or zero scores. A positive Total Score implies the alternative is “better than,” or preferred to, the baseline, while a negative score implies it is “worse than” the baseline, or disfavored. A Total Score of zero implies that the alternative is equivalent to the baseline, or exhibits similar overall performance. Any two alternatives with equal scores are considered equivalent and decision-makers should be indifferent to the two options until a differentiating factor is identified. Finally, each alternative is assigned a Gross Rank. The alternative with the highest Total Score is assigned a “1,” the next highest, a “2,” and so on. Alternatives with equal Total Scores will be assigned equal rank to signify indifference. The next rank number is skipped; the lowest rank equals the number of alternatives. 
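The weighted subsection and the Total Score reduce to a few sums. A minimal sketch, again assuming weights and scores are stored as mappings from criterion name to value (all names and numbers hypothetical):

```python
def weighted_summary(weights, evals):
    """Weighted > 0, Weighted < 0, and Total Score for one alternative.
    weights: criterion -> weight on the 1-10 scale.
    evals:   criterion -> pairwise score vs. the baseline."""
    pos = sum(weights[c] * s for c, s in evals.items() if s > 0)
    neg = sum(weights[c] * s for c, s in evals.items() if s < 0)
    # Total Score is the sum of all weighted scores, positive and negative.
    return pos, neg, pos + neg

# Hypothetical weights and 5-point-scale scores.
weights = {"Cost effectiveness": 9, "Productivity": 7, "Service life": 4}
evals = {"Cost effectiveness": -1, "Productivity": +2, "Service life": +1}
print(weighted_summary(weights, evals))  # (18, -9, 9)
```

As the article notes, the baseline necessarily receives (0, 0, 0): every one of its scores is zero by definition.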
The exception occurs where there is a tie at the lowest score, in which case the lowest rank is equal to the highest ranking available to the equivalent alternatives. The following examples illustrate the method of ranking with equal scores:
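This tie-handling rule is what is sometimes called "standard competition" ranking: tied alternatives share the highest rank available to them, and the following rank number(s) are skipped. A sketch of the rule (alternative names hypothetical):

```python
def gross_rank(total_scores):
    """Assign Gross Rank from a mapping of alternative -> Total Score.
    Highest Total Score gets rank 1; equal scores share a rank and the
    next rank number is skipped."""
    ordered = sorted(total_scores, key=total_scores.get, reverse=True)
    ranks = {}
    for position, name in enumerate(ordered, start=1):
        prev = ordered[position - 2] if position > 1 else None
        if prev is not None and total_scores[name] == total_scores[prev]:
            ranks[name] = ranks[prev]   # tie: share the higher rank
        else:
            ranks[name] = position
    return ranks

# Tie in the middle: ranks 1, 2, 2, 4 (rank 3 is skipped; lowest rank = 4).
print(gross_rank({"A": 5, "B": 3, "C": 3, "D": -1}))
# Tie at the lowest score: ranks 1, 2, 2 (the lowest rank is 2, not 3).
print(gross_rank({"A": 5, "B": 3, "C": 3}))
```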
Review the Summary section of the matrix to validate the ranking of alternatives and to “break ties.” The number of preferred and disfavored characteristics can influence the priority given to equally-scored alternatives. Also, the magnitude of the weighted positive and negative scores may sufficiently differentiate two alternatives with equal Total Scores and, therefore, equal Gross Rank. Alternatives with weighted scores of [+5, -3] and [+3, -1] each receive a Total Score of +2. However, further consideration may lead decision-makers to conclude that the additional benefits represented by the higher positive score of the first alternative do not justify accepting the greater detriment of its higher negative score. Thus, the second alternative would be ranked higher than the first in the final decision, despite its lower positive score. [See “Step 5” of The Rational Model (Vol. II) for a related discussion.]

It is also possible for all alternatives considered to rank below the baseline (i.e. negative Total Scores). That is, the baseline achieves a rank of “1.” If the baseline alternative in this scenario is the status quo, the do nothing option is prescribed. If the do nothing option is rejected, a new matrix is needed; this is discussed further in “Iterative Construction,” below.

Conducting sensitivity analysis can increase confidence in the rankings or reveal the need for adjustments. This can be done in similar fashion to that described for AHP (Vol. III). Common alterations include equal criteria weights (i.e. = 1, or no weighting) and cost-focus (weighting cost significantly more than other criteria). If there was difficulty reaching consensus on criteria weights, the influence of the contentious criteria can be evaluated by recalculating scores for the range of weights initially proposed. Likewise, the impact of zero-score evaluations, resulting from disagreement about an alternative’s merits relative to the baseline, can be similarly explored.
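Because the matrix arithmetic is so simple, sensitivity checks amount to recomputing Total Scores under alternate weight sets. A sketch of the equal-weight and cost-focus alterations, with all criterion names, weights, and scores hypothetical:

```python
def total_scores(weights, all_evals):
    """Total Score for every alternative under a given weight set."""
    return {name: sum(weights[c] * s for c, s in evals.items())
            for name, evals in all_evals.items()}

# Hypothetical pairwise evaluations of two alternatives vs. the baseline.
all_evals = {
    "Alt 1": {"Cost": -2, "Productivity": +2, "Service life": +1},
    "Alt 2": {"Cost": -1, "Productivity": +1, "Service life": +2},
}
base  = {"Cost": 8, "Productivity": 6, "Service life": 4}   # consensus weights
equal = {c: 1 for c in base}                                # no weighting
cost  = {"Cost": 10, "Productivity": 2, "Service life": 2}  # cost-focus

for label, w in [("base", base), ("equal", equal), ("cost-focus", cost)]:
    print(label, total_scores(w, all_evals))
```

If the rank order of alternatives is stable across the alternate weight sets, confidence in the ranking increases; if it flips, the contentious weights deserve further discussion.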
No definitive instruction can be provided for assessing the results of sensitivity analysis; each decision, matrix, and environment is unique, requiring decision-makers to apply their judgment and insight to reach appropriate conclusions.

Iterative Construction

A completed Pugh Matrix provides decision-makers with an initial assessment of alternatives. However, the matrix may be inconclusive, incomplete, or otherwise unsatisfactory. As mentioned above, there may be decisions to be made within the matrix. The implications of minor adjustments may be easily communicated without further computations, while other changes may warrant constructing a new matrix with the modifications incorporated. Thus, a thorough analysis may require an iterative process of matrix development. There are many changes that can be made in subsequent iterations of matrix construction. Everything learned from previous iterations should be incorporated in the next, to the extent possible without introducing bias to the analysis. A number of possible modifications and potential justifications are presented below.
Pugh Matrix Example

To demonstrate practical application of the Pugh Matrix Method, we revisit the hypothetical machine purchase decision example of Vol. III. The AHP example presented was predicated on a prior decision (unstated, assumed) to purchase a new machine; only which machine to purchase was left to decide. The detailed decision definition, or objective, was “Choose source for purchase of new production machine for Widget Line #2.” For the Pugh Matrix example, the premise is modified slightly; no prior decision is assumed. The decisions to purchase and which machine to purchase are combined in a single process (this could also be done in AHP). To formulate the decision this way, it is defined as “Determine future configuration of Widget Line #2.”

Evaluation criteria are the same as in the AHP example, weighted on a 1 – 10 scale. Criteria weights are chosen to be comparable to the previous example to the extent possible. The evaluation criteria and corresponding weights are presented in Exhibit 3. The alternatives considered are also those from the AHP example, with one addition: maintaining the existing widget machine. The cost and performance expectations of each alternative are presented in Exhibit 4. Maintaining the existing equipment is the logical choice for baseline; its performance and other characteristics are most familiar. Consequently, estimates of its future performance are also likely to be most accurate. “Shortcut” reference images are inserted above the alternative (source company) names. The performance summary is sufficiently brief that it could have been used instead to keep key details in front of evaluators at all times. For example, the shortcut reference for the baseline could be [$0.8M_50pcs/hr_6yrs].

To evaluate alternatives, an “intermediate” scale is chosen. Use of a 5-point scale is demonstrated to provide balance between discrimination and simplicity. The scoring regime in use is presented in Exhibit 5.
Each alternative is evaluated on each criterion and the scores entered in the Evaluation section of the example matrix, presented in Exhibit 6. Alternative evaluations mirror the assessments in the AHP example of Vol. III to the extent possible. There should be no surprises in the evaluations; each alternative is assessed a negative score on Cost and positive scores on Productivity and Service Life. This outcome was easily foreseeable from the performance summary of Exhibit 4. After populating the Evaluation section, the remainder of the matrix is completed with simple arithmetic, as previously described. A cursory review of the Summary section reveals interesting details that support the use of this formulation of the Pugh Matrix Method to make this type of decision. First, the three new machine alternatives are equal on each of the score tallies. Without weighting, there would be nothing to differentiate them. Second, the Jones and Wiley’s machines have equal negative weighted scores. This could be of concern to decision-makers, particularly if no clear hierarchy of preferences is demonstrated in the Gross Rank. Were this to be the case, repeating the evaluations with a refined scale (i.e. 7-point) may be in order. Finally, the Pugh Matrix Method demonstrated the same hierarchy of preferences (Gross Rank) as did AHP, but reached this conclusion with a much simpler process. This is by no means guaranteed, however; the example is purposefully simplistic. As the complexity of the decision environment increases, the additional sophistication of AHP, or other tools, becomes increasingly advantageous. Sensitivity analysis reveals that the “judgment call” made to score Wiley’s Productivity resulted in rankings that match the result of the AHP example. Had it been scored +2 instead of +1, the ranks of Wiley’s and Acme would have reversed. Again, a refined evaluation scale may be warranted to increase confidence in the final decision. 
Variations on the Theme

As mentioned in the introduction, the Pugh Matrix Method is presented as a composite of various constructions; every element of the structure presented here may not be found in any single source. Also, additional elements that could be incorporated in a matrix can be found in the literature. Several possible variations are discussed below.

In its simplest form, the Summary section of the Pugh Matrix contains only the Evaluation Tallies, as weights are not assigned to criteria. Evaluation of alternatives is conducted on a +/s/- 3-point “scale,” where an alternative rated “+” is preferred to baseline, “-” is disfavored relative to baseline, and “s” is the same as, or equivalent to, baseline in that dimension. Refining the evaluation scale consists of expanding it to ++/+/s/-/-- or even +++/++/+/s/-/--/---. Larger scales inhibit clarity for at-a-glance reviews. Until the number of criteria and/or alternatives becomes relatively large, the value of this “basic” matrix is quite limited.

A basic Pugh Matrix for the example widget-machine purchasing decision is presented in Exhibit 7. As mentioned in the example, a coarse evaluation scale and the absence of criteria weighting result in three identically scored alternatives. The matrix has done little to clarify the decision; it reinforces the decision to buy a new machine, but has not helped determine which machine to purchase. Features not present in the basic matrix were chosen for inclusion in the recommended construction. Some features included (shown in “Pugh Matrix Example,” above), and their benefits, are:
The Summary section in our matrix could be called the “Alternative Summary” to differentiate it from a “Criteria Summary.” The purpose of a Criteria Summary, ostensibly, is to evaluate the extent to which each requirement is being satisfied by the available alternatives. Our example analysis, with Criteria Summary (cols. 9 – 14, rows 4 – 13), is shown in the expanded matrix presented in Exhibit 8. It is excluded from the recommended construction because of its potential to be more of a distraction than a value-added element of the matrix. While the Evaluation Tallies may provide an indication of the quality of the alternatives offered, it is unclear how to use the weighted scores or Total Scores to any advantage (i.e. is a score of 40 outstanding, mediocre, or in between?). If decision-makers do perceive value in a Criteria Summary, it is a simple addition to the matrix; Evaluation Tallies, weighted scores, and Total Scores are analogous to those calculated in the Alternative Summary.

The use of primary and secondary criteria is also optional. Refer to “Project Selection Criteria” in Project Selection – Process, Criteria, and Other Factors for a related discussion. In the Project Selection discussion, primary criteria were called “categories of criteria,” while secondary criteria were shortened to, simply, “criteria.” Though the terminology used is slightly different, either set of terms is acceptable, as the concept and advantages of use are essentially identical. Organization, summary, and presentation of information can be facilitated by their use. For example, reporting scores for a few primary criteria may be more appropriate in an Executive Summary than several secondary criteria scores. However, this method is only advantageous when the number of criteria is large. Reviewers should also beware the potential abuse of this technique; important details could be masked – intentionally or unintentionally hidden – by an amalgamated score.
An alternate criteria-weighting scheme prescribes that the sum of all criteria weights equal unity. This is practical only for a small number of criteria; larger numbers of criteria require weights with additional significant digits (i.e. decimal places). The relative weights of numerous criteria quickly become too difficult to monitor for the scores to remain meaningful. The 1 – 10 scale is easy to understand and consistently apply.

Nonlinear criteria weighting can significantly increase the discriminatory power of the matrix; however, it comes at a high cost. Development of scoring curves can be difficult and the resulting calculations are far more complex. A key advantage of the Pugh Matrix Method – namely, simplicity – is lost when nonlinear scoring is introduced.

The final optional element to be discussed is the presentation of multiple iterations of the Pugh Matrix Method in a single matrix. An example, displaying three iterations, is presented in Exhibit 9. Features of note include:
Final Notes
Confidence in any tool is developed with time and experience. The Pugh Matrix Method is less sophisticated than other tools, such as AHP (Vol. III), and, thus, may require a bit more diligence. For example, the Pugh Matrix lacks the consistency check of AHP. Therefore, it could be more susceptible to error, misuse, or bias; the offset is its simplicity. A conscientious decision-making team can easily overcome the matrix’s deficiency and extract value from its use.

The Pugh Matrix is merely a decision-making aid and, like any other, it is limited in power. The outcome of a decision is not necessarily an accurate reflection of the decision-making aid used. It cannot overcome poor criteria choices, inaccurate estimates, inadequate alternatives, or a deficiency of expertise among evaluators. “Garbage in, garbage out” remains true in this context. It is important to remember that “the matrix does not make the decision;” it merely guides decision-makers. Ultimately, it is the responsibility of those decision-makers to choose appropriate tools, input accurate information, apply relevant expertise and sound judgment, and validate and “own” any decision made.

For additional guidance or assistance with decision-making or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment. For a directory of “Making Decisions” volumes on “The Third Degree,” see Vol. I: Introduction and Terminology.

References

[Link] “How To Use The Pugh Matrix.” Decision Making Confidence.
[Link] “What is a Decision Matrix?” ASQ.
[Link] “Pugh Matrix.” CIToolkit.
[Link] “The Systems Engineering Tool Box – Pugh Matrix (PM).” Stuart Burge, 2009.
[Link] “The Pugh Controlled Convergence method: model-based evaluation and implications for design theory.” Daniel Frey, et al; Research in Engineering Design, 2009.
[Link] “Decide and Conquer.” Bill D. Bailey and Jan Lee; Quality Progress, April 2016.
[Link] Farris, J., & Jack, H.
(2011, June). “Enhanced Concept Selection for Students.” Paper presented at the 2011 ASEE Annual Conference & Exposition, Vancouver, BC. 10.18260/1-2--17895
[Link] Takai, Shun & Ishii, Kosuke. (2004). “Modifying Pugh’s Design Concept Evaluation Methods.” Proceedings of the ASME Design Engineering Technical Conference. 3. 10.1115/DETC2004-57512.
[Link] Kremer, Gül & Tauhid, Shafin. (2008). “Concept selection methods – A literature review from 1980 to 2008.” International Journal of Design Engineering. 1. 10.1504/IJDE.2008.023764.
[Link] “Concept Selection.” Design Institute, Xerox Corporation, September 1, 1987.
[Link] The Lean 3P Advantage. Allan R. Coletta; CRC Press, 2012.

Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
© JayWink Solutions, LLC