
Making Decisions – Vol. VIII:  The Pugh Matrix Method

2/9/2022


 
     A Pugh Matrix is a visual aid created during a decision-making process. It presents, in summary form, a comparison of alternatives with respect to critical evaluation criteria.  As is true of other decision-making tools, a Pugh Matrix will not “make the decision for you.”  It will, however, facilitate rapidly narrowing the field of alternatives and focusing attention on the most viable candidates.
     A useful way to conceptualize the Pugh Matrix Method is as an intermediate-level tool, positioned between the structured, but open Rational Model (Vol. II) and the thorough Analytic Hierarchy Process (AHP, Vol. III).  The Pugh Matrix is more conclusive than the former and less complex than the latter.
     Typical practice of “The Third Degree” is to present a tool or topic as a composite, building upon the most useful elements of various sources and variants.  The Pugh Matrix Method is a profound example of this practice; even the title is a composite!  Information about this tool can be found in the literature under various headings, including Pugh Matrix, Decision Matrix, Pugh Method, Pugh Controlled Convergence Method, and Concept Selection.
     This technique was developed in the 1980s by Stuart Pugh, Professor of Design at Strathclyde University in Glasgow, Scotland.  Although originally conceived as a design concept selection technique, the tool is applicable to a much broader range of decision contexts.  It is, therefore, presented here in more generic terms than in Pugh’s original construction.
 
The Pugh Matrix Method
     A schematic diagram of the Pugh Matrix is presented in Exhibit 1.  To facilitate learning to use the matrix, column and row numbers are referenced [i.e. col(s). yy, row(s) xx] when specific areas are introduced or discussed.  Column and row numbers may differ in practice, as the matrix is expanded or modified to suit a specific decision.  Instructions and guidelines for completing the Pugh Matrix, based on the schematic of Exhibit 1, are given below.
[Exhibit 1:  Schematic diagram of the Pugh Matrix]
     The first step of any decision-making process is to clearly define the decision to be made.  A description of the situation faced – problem, opportunity, or objective – clarifies the purpose of the exercise for all involved.  This description should include as much detailed information as is necessary for participants to effectively evaluate alternatives.  A concise statement that encapsulates this information should also be crafted to be used as a title for summary documentation.  Enter this title in the Decision box above the matrix.
     Below the matrix, in the Notes boxes, collect information as the matrix is constructed that is needed to “follow” the analysis and understand the decision rationale.  Space is limited to encourage the use of shorthand notes only; some typical contents are suggested in the schematic (Exhibit 1).  Detailed discussions and explanations, including the long-form decision description, should be documented separately.  The Pugh Matrix is typically a component of an analysis report where these details are also recorded.  Detailed information on the alternatives evaluated should also be included.
 
     A well-defined decision informs the development of a list of relevant evaluation criteria or specifications.  Each criterion or specification should be framed such that a higher rating is preferred to a lower one.  Ambiguous phrasing can lead to misguided conclusions.  For example, evaluators may interpret the criterion “cost” differently.  Some may rate higher cost with a higher score, though the reverse is intended.  A criterion of “low cost” or “cost effectiveness,” while referring to the same attribute and data, may communicate the intent more clearly.  Compile the list of criteria in the Evaluation section of the matrix (col. 1, rows 4 - 8).
     Next, the criteria are weighted, or prioritized, in the Criteria Weight column (col. 2, rows 4 - 8).  A variety of weighting scales can be used; thus it is critical that the scale in use be clearly defined prior to beginning the process.  A 1 – 10 scale offers simplicity and familiarity and is, therefore, an attractive option.  An exemplar 1 – 10 scale is provided in Exhibit 2.  The universal scale factor descriptions may be employed as presented or modified to suit a particular decision environment or organization.  Specify the scale in use in the Criteria Weight box (col. 2, row 2).
[Exhibit 2:  Exemplar 1 – 10 criteria weighting scale]
     With the relevant criteria in mind, it is time to brainstorm alternatives.  Before entering the alternatives in the matrix, the list should be screened as discussed in Project Selection – Process, Criteria, and Other Factors.  The matrix is simplified when alternatives that can quickly be determined to be infeasible are eliminated from consideration.  Enter the reduced list in the Alternatives section of the matrix (cols. 3 - 6, row 3).  The first alternative listed (col. 3, row 3) is the baseline or “datum”, typically chosen to satisfy one of the following conditions:
  • The incumbent (i.e. current, existing) product, process, etc.
  • A competitive product, technology, etc.
  • The most fully-defined or most familiar alternative, if no incumbent or other “obvious” choice exists.
  • Random choice, if no incumbent exists and alternatives cannot otherwise be sufficiently differentiated prior to analysis.
     Above each alternative identification, an additional cell is available for a “shortcut” reference.  This shortcut reference can take many forms, including:
  • A sketch representing a simple design concept (e.g. sharp corner, round, chamfer).
  • A single word or short phrase identifying the technology used (e.g. laser, waterjet, EDM, plasma).
  • A symbol representing a key element of the alternative (e.g. required PPE or recycling symbols for each type of material waste generated).
  • Any other reference that is relevant, simple, and facilitates analysis.
     If appropriate shortcut references are available, add them to the matrix (cols. 3 - 6, row 2) to accompany the formal alternative identifications.  Omit any that are ambiguous, misleading, or distracting; shortcut references are not required and should be included only when they serve as helpful reminders of the details of the alternatives and thereby facilitate the evaluation process.
 
     The next step in the process is to select an evaluation scale; 3-, 5-, and 7-point scales are common, though others could be used.  The scale is centered at zero, with an equal number of possible scores above and below.  The 3-point scale will have one possible score above (+1) and one possible score below (-1) zero.  The 5-point scale will have two possible scores on each side of zero (+2, +1, 0, -1, -2) and so on.  Larger scales can also be constructed; however, the smallest scale that provides sufficient discrimination between alternatives is recommended.  A positive score indicates that an alternative is preferred to the baseline with respect to that criterion; a negative score indicates that it is less preferred, or disfavored.  Larger numbers (when available in the scale selected) indicate the magnitude of the preference.  Specify the scale in use in the Evaluation Scale box (col. 2, row 2).
     Complete the Evaluation section of the matrix by rating each alternative.  Conduct pairwise comparisons of each alternative and the baseline for each criterion listed, entering each score in the corresponding cell (cols. 4 - 6, rows 4 - 8).  By definition, the baseline alternative scores a zero for each criterion (col. 3, rows 4 - 8).
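     For teams that prefer to automate the matrix bookkeeping in a script or spreadsheet, the data recorded so far reduce to a few plain structures.  The Python sketch below uses hypothetical criteria, weights, alternatives, and scores chosen purely for illustration; they are not the values of the worked example later in this article.
```python
# Hypothetical Pugh Matrix data -- for illustration only.
# Criteria weights on the 1-10 scale (10 = most important).
weights = {"cost effectiveness": 9, "productivity": 7, "service life": 5}

# Alternatives; the first entry serves as the baseline ("datum").
alternatives = ["Baseline", "Option A", "Option B"]

# Pairwise evaluations vs. the baseline on a 5-point scale
# (+2, +1, 0, -1, -2); the baseline scores 0 on every criterion
# by definition.
scores = {
    "Baseline": {"cost effectiveness": 0, "productivity": 0, "service life": 0},
    "Option A": {"cost effectiveness": -1, "productivity": 2, "service life": 1},
    "Option B": {"cost effectiveness": -2, "productivity": 1, "service life": 2},
}
```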
 
     Once all of the alternatives have been scored on all criteria, the Summary section of the matrix can be completed.  The expanded Summary section presented in Exhibit 1 allows detailed analysis and comparisons of performance while remaining easy to complete, requiring only simple arithmetic.
     To populate the upper subsection of the Summary, tally the number of criteria for which each alternative is equivalent to (score = 0), preferred to (score > 0), and less preferred than (score < 0) the baseline.  Enter these numbers in the # Criteria = 0 (cols. 4 - 6, row 9), # Criteria > 0 (cols. 4 - 6, row 10), and # Criteria < 0 (cols. 4 - 6, row 11) cells, respectively.  The baseline’s tallies will equal the number of criteria, zero, and zero, respectively.
     The lower subsection of the Summary displays the weighted positive (preferred) and negative (disfavored) scores for each alternative.  For each criterion for which an alternative is preferred, multiply its criteria weight by the evaluation score; sum the products in the Weighted > 0 cell for each alternative (cols. 4 - 6, row 12).  Likewise, sum the products of the weights and scores of all criteria for which an alternative is disfavored and enter the negative sum in the Weighted < 0 cell for that alternative (cols. 4 - 6, row 13).  Again, the baseline receives scores of zero (col. 3, rows 12 - 13).
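     Continuing the sketch above, the Summary-section arithmetic amounts to a handful of counts and weighted sums per alternative:
```python
def summarize(alt):
    """Summary-section figures for one alternative: tallies of criteria
    scored = 0, > 0, and < 0, plus the weighted positive and negative sums."""
    s = scores[alt]
    tally_eq  = sum(1 for v in s.values() if v == 0)   # '# Criteria = 0'
    tally_pos = sum(1 for v in s.values() if v > 0)    # '# Criteria > 0'
    tally_neg = sum(1 for v in s.values() if v < 0)    # '# Criteria < 0'
    weighted_pos = sum(weights[c] * v for c, v in s.items() if v > 0)
    weighted_neg = sum(weights[c] * v for c, v in s.items() if v < 0)
    return (tally_eq, tally_pos, tally_neg), weighted_pos, weighted_neg
```
For “Option A” in the earlier sketch, this returns tallies of (0, 2, 1) with weighted scores of +19 and -9.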
 
     The final portion of the matrix to be populated is the Rank section, which includes the Total Score (cols. 3 - 6, row 14) and Gross Rank (cols. 3 - 6, row 15) of each alternative.  The Total Score is calculated by summing the products of weights and scores for all criteria or simply summing the positive and negative weighted scores.  Again, the definition of baseline scoring requires it to receive a zero score.  Other alternatives may earn positive, negative, or zero scores.  A positive Total Score implies the alternative is “better than,” or preferred to, the baseline, while a negative score implies it is “worse than” the baseline, or disfavored.  A Total Score of zero implies that the alternative is equivalent to the baseline, or exhibits similar overall performance.  Any two alternatives with equal scores are considered equivalent and decision-makers should be indifferent to the two options until a differentiating factor is identified.
     Finally, each alternative is assigned a Gross Rank.  The alternative with the highest Total Score is assigned a “1,” the next highest, a “2,” and so on.  Alternatives with equal Total Scores are assigned equal rank to signify indifference; the rank numbers that would have followed are skipped, such that the lowest rank equals the number of alternatives.  The exception occurs when there is a tie at the lowest score, in which case the lowest rank equals the highest rank available to the equivalent alternatives.  The following examples illustrate the method of ranking with equal scores:
  • Of six alternatives, three are scored equally at second rank.  The Gross Rank of these six alternatives is, therefore, [1–2–2–2–5–6].
  • Of six alternatives, the two lowest-scoring alternatives are equivalent.  The Gross Rank of these six alternatives is, therefore, [1–2–3–4–5–5].
The hierarchy of alternative preferences is called Gross Rank because further analysis may lead decision-makers to modify the rank order of alternatives or “break ties.”  This will be discussed in more detail later.
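     In code, the Total Score and Gross Rank steps, including the tie-handling rule illustrated above, might look like the following (continuing the earlier sketch):
```python
def total_score(alt):
    """Total Score = sum of (weight x score) over all criteria,
    equivalently the sum of the weighted positive and negative scores."""
    return sum(weights[c] * v for c, v in scores[alt].items())

def gross_rank(totals):
    """Competition-style ranking of a {name: Total Score} mapping:
    equal scores share a rank; the rank numbers that would have
    followed are skipped."""
    ordered = sorted(totals, key=totals.get, reverse=True)
    ranks = {}
    for i, a in enumerate(ordered):
        if i and totals[a] == totals[ordered[i - 1]]:
            ranks[a] = ranks[ordered[i - 1]]   # tie: share the higher rank
        else:
            ranks[a] = i + 1                   # position accounts for skips
    return ranks

totals = {a: total_score(a) for a in alternatives}
ranks = gross_rank(totals)
```
Applied to six Total Scores such as {A: 9, B: 7, C: 7, D: 7, E: 4, F: 2}, gross_rank returns ranks [1, 2, 2, 2, 5, 6], matching the first example above.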
 
     Review the Summary section of the matrix to validate the ranking of alternatives and to “break ties.”  The number of preferred and disfavored characteristics can influence the priority given to equally-scored alternatives.  Also, the magnitude of the weighted positive and negative scores may sufficiently differentiate two alternatives with equal Total Scores and, therefore, equal Gross Rank.  Alternatives with weighted scores of [+5, -3] and [+3, -1] each receive a Total Score of +2.  However, further consideration may lead decision-makers to conclude that the additional benefits represented by the higher positive score of the first alternative do not justify accepting the greater detriment of its higher negative score.  Thus, the second alternative would be ranked higher than the first in the final decision, despite its lower positive score.  [See “Step 5” of The Rational Model (Vol. II) for a related discussion.]
     It is also possible for all alternatives considered to rank below the baseline (i.e. negative Total Scores).  That is, the baseline achieves a rank of “1.”  If the baseline alternative in this scenario is the status quo, the “do nothing” option is prescribed.  If the “do nothing” option is rejected, a new matrix is needed; this is discussed further in “Iterative Construction,” below.
 
     Conducting sensitivity analysis can increase confidence in the rankings or reveal the need for adjustments.  This can be done in similar fashion to that described for AHP (Vol. III).  Common alterations include equal criteria weights (i.e. = 1, or no weighting) and cost-focus (weighting cost significantly more than other criteria).  If there was difficulty reaching consensus on criteria weights, the influence of the contentious criteria can be evaluated by recalculating scores for the range of weights initially proposed.  Likewise, the impact of zero-score evaluations, resulting from disagreement about an alternative’s merits relative to the baseline, can be similarly explored.  No definitive instruction can be provided to assess the results of sensitivity analysis; each decision, matrix, and environment is unique, requiring decision-makers to apply their judgment and insight to reach appropriate conclusions.
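     As a concrete illustration, the sketch below reruns the hypothetical matrix from the earlier code under equal weights and under a cost-focused weighting; the tenfold multiplier is an arbitrary demonstration value, not a prescribed factor.
```python
def totals_with(w):
    """Recompute Total Scores under an alternate weight set w."""
    return {a: sum(w[c] * v for c, v in scores[a].items()) for a in alternatives}

equal_weights = {c: 1 for c in weights}                 # no weighting
cost_focus = {c: (v * 10 if c == "cost effectiveness" else v)
              for c, v in weights.items()}              # arbitrary 10x multiplier

for label, w in [("original", weights), ("equal", equal_weights),
                 ("cost focus", cost_focus)]:
    t = totals_with(w)
    print(label, t, gross_rank(t))

# Rankings that hold across these variations increase confidence in the
# result; rank reversals flag weights or scores that deserve another look.
```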
 
Iterative Construction
     A completed Pugh Matrix provides decision-makers with an initial assessment of alternatives.  However, the matrix may be inconclusive, incomplete, or otherwise unsatisfactory.  As mentioned above, there may be decisions to be made within the matrix.  Minor adjustments may be easily communicated without further computation, while others may warrant constructing a new matrix with the modifications incorporated.  Thus, a thorough analysis may require an iterative process of matrix development.
     There are many changes that can be made for subsequent iterations of matrix construction.  Everything learned from previous iterations should be incorporated in the next, to the extent possible without introducing bias to the analysis.  A number of possible modifications and potential justifications are presented below.
  • Refine the decision description.  If the evaluation process revealed any ambiguity in the definition of the decision to be made, clarify or restate it.  The purpose of the matrix must be clear for it to be effective; a “narrowed” view may be needed to achieve this.
  • Refine criteria definitions.  If the framing of any evaluation criterion has caused difficulty, such as varying interpretation by multiple evaluators, adjust its definition.  Consistent interpretation is required for meaningful evaluation of alternatives.
  • Add or remove criteria.  Discussion and evaluation of alternatives may reveal relevant criteria that had been previously overlooked; add them to the list.  Criteria that have failed to demonstrate discriminatory power can be removed.  This may occur when one criterion is strongly correlated with another; the alternatives may also be legitimately indistinguishable in a particular dimension.  Criteria should not be eliminated hastily, however, as results of another iteration may lead to a different conclusion.  Therefore, a criterion should be retained for at least two iterations and removed only if the results continue to support doing so.  New alternatives should also be evaluated against any eliminated criterion to validate its continued exclusion from the matrix.
  • Modify the Criteria Weighting Scale.  Though included for completeness, modifying the criteria weighting scale is, arguably, the least helpful adjustment that can be made.  It is difficult to conceive of one more advantageous than the “standard” 1 – 10 scale; it is recommended as the default.
  • Review Criteria Weights.  If different conclusions are reached at different steps of the analysis and review, criteria weighting may require adjustment.  That is, if Evaluation Tallies, Total Scores, and sensitivity analysis indicate significantly different preferences, the Criteria Weights assigned may not accurately reflect the true priorities.  Criteria Weights should be revised only with extreme caution, however; bias could easily be introduced, supporting a predetermined, but suboptimal, conclusion.
  • Add or remove alternatives.  Discussion and evaluation may lead to the discovery of additional alternatives or reveal opportunities to combine favorable characteristics of existing alternatives to create hybrid solutions.  Add any new alternatives developed to the matrix for evaluation.  Alternatives that are comprehensively dominated in multiple iterations are candidates for elimination.
  • Select a different baseline.  Comparing alternatives to a different baseline may interrupt biased evaluations, improving the accuracy of assessments and rankings.  Varied perspectives can also clarify advantages of alternatives, facilitating final ranking decisions.
  • Modify “shortcut” references.  If shortcut references do not provide clarity that facilitates evaluation of alternatives, modify or remove them.  It is better for them to be absent than to be confusing.
  • Refine the Evaluation Scale.  Implementing an evaluation scale with a wider range (replacing a 3-point scale with a 5- or 7-point scale, for example) improves the ability to differentiate the performance of alternatives.  Increased discriminatory power allows the capture of greater nuance in the evaluations, reducing the number of equivalent or indifferent ratings and creating a discernible hierarchy of alternatives.
     Additional research may also be needed between iterations.  Decision-making often relies heavily on estimates of costs, benefits, and performance as assessed by a number of metrics.  Improving these estimates may be necessary for meaningful comparison of alternatives and a conclusive analysis.
 
Pugh Matrix Example
     To demonstrate practical application of the Pugh Matrix Method, we revisit the hypothetical machine purchase decision example of Vol. III.  The AHP example presented was predicated on a prior decision (unstated, assumed) to purchase a new machine; only which machine to purchase was left to decide.  The detailed decision definition, or objective, was “Choose source for purchase of new production machine for Widget Line #2.”  For the Pugh Matrix example, the premise is modified slightly; no prior decision is assumed.  The decisions to purchase and which machine to purchase are combined in a single process (this could also be done in AHP).  Formulated this way, the decision is defined as “Determine future configuration of Widget Line #2.”
     Evaluation criteria are the same as in the AHP example, weighted on a 1 – 10 scale.  Criteria weights are chosen to be comparable to the previous example to the extent possible.  The evaluation criteria and corresponding weights are presented in Exhibit 3.
[Exhibit 3:  Evaluation criteria and weights]
     The alternatives considered are also those from the AHP example, with one addition:  maintaining the existing widget machine.  The cost and performance expectations of each alternative are presented in Exhibit 4.  Maintaining the existing equipment is the logical choice for baseline; its performance and other characteristics are most familiar.  Consequently, estimates of its future performance are also likely to be most accurate.
[Exhibit 4:  Cost and performance expectations of each alternative]
     “Shortcut” reference images are inserted above the alternative (source company) names.  The performance summary is sufficiently brief that it could have been used instead to keep key details in front of evaluators at all times.  For example, the shortcut reference for the baseline could be [$0.8M_50pcs/hr_6yrs].
     To evaluate alternatives, an “intermediate” scale is chosen.  Use of a 5-point scale is demonstrated to provide balance between discrimination and simplicity.  The scoring regime in use is presented in Exhibit 5.
[Exhibit 5:  5-point evaluation scoring regime]
     Each alternative is evaluated on each criterion and the scores entered in the Evaluation section of the example matrix, presented in Exhibit 6.  Alternative evaluations mirror the assessments in the AHP example of Vol. III to the extent possible.  There should be no surprises in the evaluations; each alternative is assessed a negative score on Cost and positive scores on Productivity and Service Life.  This outcome was easily foreseeable from the performance summary of Exhibit 4.  After populating the Evaluation section, the remainder of the matrix is completed with simple arithmetic, as previously described.
[Exhibit 6:  Example Pugh Matrix]
     A cursory review of the Summary section reveals interesting details that support the use of this formulation of the Pugh Matrix Method to make this type of decision.  First, the three new machine alternatives are equal on each of the score tallies.  Without weighting, there would be nothing to differentiate them.  Second, the Jones and Wiley’s machines have equal negative weighted scores.  This could be of concern to decision-makers, particularly if no clear hierarchy of preferences is demonstrated in the Gross Rank.  Were this to be the case, repeating the evaluations with a refined scale (i.e. 7-point) may be in order.
     Finally, the Pugh Matrix Method demonstrated the same hierarchy of preferences (Gross Rank) as did AHP, but reached this conclusion with a much simpler process.  This is by no means guaranteed, however; the example is purposefully simplistic.  As the complexity of the decision environment increases, the additional sophistication of AHP, or other tools, becomes increasingly advantageous.
     Sensitivity analysis reveals that the “judgment call” made to score Wiley’s Productivity resulted in rankings that match the result of the AHP example.  Had it been scored +2 instead of +1, the ranks of Wiley’s and Acme would have reversed.  Again, a refined evaluation scale may be warranted to increase confidence in the final decision.
 
Variations on the Theme
     As mentioned in the introduction, the Pugh Matrix Method is presented as a composite of various constructions; every element of the structure presented here may not be found in another single source.  Also, additional elements that could be incorporated in a matrix can be found in the literature.  Several possible variations are discussed below.
     In its simplest form, the Summary section of the Pugh Matrix contains only the Evaluation Tallies, as weights are not assigned to criteria.  Evaluation of alternatives is conducted on a +/s/- 3-point “scale,” where an alternative rated “+” is preferred to the baseline, “-” is disfavored relative to the baseline, and “s” is the same as, or equivalent to, the baseline in that dimension.  Refining the evaluation scale consists of expanding it to ++/+/s/-/-- or even +++/++/+/s/-/--/---.  Larger scales inhibit clarity for at-a-glance reviews.  Until the number of criteria and/or alternatives becomes relatively large, the value of this “basic” matrix is quite limited.  A basic Pugh Matrix for the example widget-machine purchasing decision is presented in Exhibit 7.  As mentioned in the example, a coarse evaluation scale and the absence of criteria weighting result in three identically scored alternatives.  The matrix has done little to clarify the decision; it reinforces the decision to buy a new machine, but has not helped determine which machine to purchase.
[Exhibit 7:  Basic Pugh Matrix for the widget-machine decision]
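     A sketch of this basic, unweighted variant (again with hypothetical data) shows how little computation it involves, and how easily identical tallies arise:
```python
# Basic Pugh Matrix: symbolic +/s/- evaluations and tally-only summaries,
# with no criteria weights.  One symbol per criterion; values hypothetical.
basic = {
    "Option A": ["-", "+", "+"],
    "Option B": ["-", "+", "+"],
    "Option C": ["-", "+", "+"],
}
for alt, marks in basic.items():
    print(alt, {m: marks.count(m) for m in "+s-"})
# All three alternatives tally identically -- without weights or a finer
# scale, nothing differentiates them.
```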
     Features not present in the basic matrix were chosen for inclusion in the recommended construction.  Some features included (shown in “Pugh Matrix Example,” above), and their benefits, are:
  • Expanded Summary section
    • Greater insight into relative performance of alternatives.
  • 5-Point Numerical Evaluation Scale
    • At-a-glance clarity.
    • Compatibility with and simplicity in use of spreadsheet for calculations.
    • Balance between discrimination and simplicity.
  • 1 – 10 Criteria Weight Scale
    • Familiarity and simplicity of scale.
    • Greater discriminatory power of matrix.
 
     The Summary section in our matrix could be called the “Alternative Summary” to differentiate it from a “Criteria Summary.”  The purpose of a Criteria Summary, ostensibly, is to evaluate the extent to which each requirement is being satisfied by the available alternatives.  Our example analysis, with Criteria Summary (cols. 9 – 14, rows 4 – 13), is shown in the expanded matrix presented in Exhibit 8.  It is excluded from the recommended construction because of its potential to be more of a distraction than a value-added element of the matrix.  While the Evaluation Tallies may provide an indication of the quality of the alternatives offered, it is unclear how to use the weighted scores or Total Scores to any advantage (e.g. is a score of 40 outstanding, mediocre, or in between?).  If decision-makers do perceive value in a Criteria Summary, it is a simple addition to the matrix; Evaluation Tallies, weighted scores, and Total Scores are analogous to those calculated in the Alternatives Summary.
[Exhibit 8:  Expanded matrix with Criteria Summary]
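     If a Criteria Summary is desired, it is essentially the transpose of the per-alternative computation.  A sketch follows, reusing the hypothetical structures from the earlier code; computing across the non-baseline alternatives is an assumption here, as the baseline would contribute only zeros in any case.
```python
# Criteria Summary sketch: tallies and weighted totals computed per
# criterion across the non-baseline alternatives (an assumption; the
# baseline contributes only zeros regardless).
for c in weights:
    row = [scores[a][c] for a in alternatives if a != "Baseline"]
    print(c,
          "preferred:",  sum(1 for v in row if v > 0),
          "disfavored:", sum(1 for v in row if v < 0),
          "weighted total:", weights[c] * sum(row))
```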
     The use of primary and secondary criteria is also optional.  Refer to “Project Selection Criteria” in Project Selection – Process, Criteria, and Other Factors for a related discussion.  In the Project Selection discussion, primary criteria were called “categories of criteria,” while secondary criteria was shortened to, simply, “criteria.”  Though the terminology used is slightly different, either set of terms is acceptable, as the concept and advantages of use are essentially identical.  Organization, summary, and presentation of information can be facilitated by their use.  For example, reporting scores for a few primary criteria may be more appropriate in an Executive Summary than several secondary criteria scores.  However, this method is only advantageous when the number of criteria is large.  Reviewers should also beware the potential abuse of this technique; important details could be masked – intentionally or unintentionally hidden – by an amalgamated score.
 
     An alternate criteria-weighting scheme prescribes that the sum of all criteria weights equal unity.  This is practical only for a small number of criteria; larger numbers of criteria require weights with additional significant digits (i.e. decimal places).  The relative weights of numerous criteria quickly become too difficult to monitor for the scores to remain meaningful.  The 1 – 10 scale is easy to understand and consistently apply.
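     For comparison, converting a 1 – 10 weight set to the unity-sum scheme is a simple normalization, sketched below with the hypothetical weights from the earlier code; the resulting decimal weights hint at why the scheme becomes unwieldy as criteria multiply.
```python
total = sum(weights.values())                  # 9 + 7 + 5 = 21
unity = {c: round(w / total, 3) for c, w in weights.items()}
print(unity)   # {'cost effectiveness': 0.429, 'productivity': 0.333, 'service life': 0.238}
assert abs(sum(unity.values()) - 1.0) < 0.01   # rounding leaves a small error
```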
     Nonlinear criteria weighting can significantly increase the discriminatory power of the matrix; however, it comes at a high cost.  Development of scoring curves can be difficult and the resulting calculations are far more complex.  A key advantage of the Pugh Matrix Method – namely, simplicity – is lost when nonlinear scoring is introduced.
 
     The final optional element to be discussed is the presentation of multiple iterations of the Pugh Matrix Method in a single matrix.  An example, displaying three iterations, is presented in Exhibit 9.  Features of note include:
  • Results of multiple iterations are shown side by side by side for each alternative.
  • Choice of baseline (“datum”) for each iteration is clearly identified (dark-shaded columns).
  • Eliminated alternatives are easily identified (light-shaded columns).
  • Eliminated criteria are easily identified (light-shaded rows).
  • Graphical presentation of summary scores is useful for at-a-glance reviews.
This presentation format is useful for a basic matrix (i.e. no criteria weighting, 3-point evaluation scale).  As features are added to the matrix, however, it becomes more difficult and less practical – at some point, infeasible – to present analysis results in this format.
[Exhibit 9:  Three iterations presented in a single matrix]
Final Notes
     Confidence in any tool is developed with time and experience.  The Pugh Matrix Method is less sophisticated than other tools, such as AHP (Vol. III), and, thus, may require a bit more diligence.  For example, the Pugh Matrix lacks the consistency check of AHP.  Therefore, it could be more susceptible to error, misuse, or bias; the offset is its simplicity.  A conscientious decision-making team can easily overcome the matrix’s deficiency and extract value from its use.
     The Pugh Matrix is merely a decision-making aid and, like any other, it is limited in power.  The outcome of a decision is not necessarily an accurate reflection of the decision-making aid used.  It cannot overcome poor criteria choices, inaccurate estimates, inadequate alternatives, or a deficiency of expertise exhibited by evaluators.  “Garbage in, garbage out” remains true in this context.
     It is important to remember that “the matrix does not make the decision;” it merely guides decision-makers.  Ultimately, it is the responsibility of those decision-makers to choose appropriate tools, input accurate information, apply relevant expertise and sound judgment, and validate and “own” any decision made.
 
     For additional guidance or assistance with decision-making or other Operations challenges, feel free to leave a comment, contact JayWink Solutions, or schedule an appointment.
 
     For a directory of “Making Decisions” volumes on “The Third Degree,” see Vol. I:  Introduction and Terminology.
 
References
“How To Use The Pugh Matrix.”  Decision Making Confidence.
“What is a Decision Matrix?”  ASQ.
“Pugh Matrix.”  CIToolkit.
Burge, Stuart.  “The Systems Engineering Tool Box – Pugh Matrix (PM).”  2009.
Frey, Daniel, et al.  “The Pugh Controlled Convergence method: model-based evaluation and implications for design theory.”  Research in Engineering Design, 2009.
Bailey, Bill D. and Jan Lee.  “Decide and Conquer.”  Quality Progress, April 2016.
Farris, J. and Jack, H.  “Enhanced Concept Selection for Students.”  2011 ASEE Annual Conference & Exposition, Vancouver, BC, June 2011.  doi:10.18260/1-2--17895
Takai, Shun and Ishii, Kosuke.  “Modifying Pugh’s Design Concept Evaluation Methods.”  Proceedings of the ASME Design Engineering Technical Conference, 2004.  doi:10.1115/DETC2004-57512
Kremer, Gül and Tauhid, Shafin.  “Concept selection methods – A literature review from 1980 to 2008.”  International Journal of Design Engineering, 2008.  doi:10.1504/IJDE.2008.023764
“Concept Selection.”  Design Institute, Xerox Corporation, September 1, 1987.
Coletta, Allan R.  The Lean 3P Advantage.  CRC Press, 2012.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com