While the Rational Model provides a straightforward decision-making aid that is easy to understand and implement, it is not well-suited, on its own, to highly complex decisions. A large number of decision criteria may create numerous tradeoff opportunities that are not easily comparable. Likewise, disparate performance expectations of alternatives may make the “best” choice elusive. In these situations, an additional evaluation tool is needed to ensure a rational decision.
The scenario described above requires Multi-criteria Analysis (MCA). One form of MCA is Analytic Hierarchy Process (AHP). In this installment of “Making Decisions,” application of AHP is explained and demonstrated via a common example – a purchasing decision to source a new production machine.
Before embarking on the Analytic Hierarchy Process example, it is important to note that AHP was originally developed with matrix notation; calculations were performed with matrix algebra. To make this decision-making aid more accessible, AHP will be executed here with tables and basic mathematics in lieu of matrix operations. Due, in part, to the simplified mathematics, the process, at first glance, may seem long and tedious. Do not be discouraged! Though the presentation may seem lengthy, the process is easy to follow and simple to implement, particularly when using a spreadsheet to perform the required calculations.
Developing a Hierarchy
The first step of the decision-making process, as always, is to define the decision to be made. In AHP, the decision scenario is represented by a hierarchy. The decision hierarchy for our example, shown in Exhibit 1, consists of three levels. Level 1 is the goal, objective, or purpose of the decision. The goal of our example is summarized as “Buy Widget Machine.” The long form of this objective statement may be “Choose source for purchase of new production machine for Widget Line #2.” If the objective is understood by all involved, the summary statement is sufficient; if there is concern of confusion, use a more detailed statement.
Level 2 is populated with the criteria deemed relevant to the decision. In our example, machines will be evaluated on the dimensions of cost, productivity, and service life. Additional criteria could have been added, as well as additional levels of analysis. For example, the cost dimension could have been split into sub-criteria such as purchase price, maintenance cost, and disposal cost. However, we will forego the additional complexity in our example, as it may be a bit overwhelming in one’s first exposure to AHP.
Level 3 identifies the alternatives to be considered. For our example, let’s assume that RFPs (requests for proposal) have been sent to Jones Machinery, Wiley’s Widget Works, and Acme Apparatus Co. At this stage, only the potential sources are known; it is best if the content of the proposals has not yet been revealed to evaluators. A “blind” process limits the bias affecting the criteria evaluations.
Additional alternatives could also have been included in the analysis. An analysis with the minimum number of levels, with three evaluation criteria and three alternatives (a “3 x 3 decision matrix”) was chosen for balance. The analysis is sufficiently complex to demonstrate the value of AHP, but not so complex as to overwhelm those unfamiliar with it. The process followed here is appropriate with as many as ten criteria and alternatives (10 x 10 matrix). Larger decision matrices require adjustments that are beyond the scope of this discussion.
Once the decision hierarchy has been established, the analysis can begin in earnest. This begins with pairwise comparisons of the evaluation criteria to quantify the relative importance of each. Comparisons are “scored” on Saaty’s Pairwise Comparison Scale, shown in Exhibit 2. Scores are usually odd numbers, ranging from 1 to 9. If greater discrimination or compromise is required, even numbers (from 2 to 8) can be used. Maximum discrimination is achieved using decimal scores, but the additional effort adds value only in the most extraordinary circumstances. Ours is a straightforward example, requiring only odd integer scores.
The pairwise comparisons of evaluation criteria for our example are shown in Exhibit 3. To complete the table, only the three shaded cells in the upper right require entries. The diagonal is always populated with “1.000,” as each criterion compared with itself must be of equal importance. Entries in the lower left section of the table are calculated automatically as the reciprocals of those in the upper right (mirrored across the diagonal), as the order of comparison is reversed. Sum each column of the completed comparison table.
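The table-filling mechanics described above (a diagonal of ones, entered scores in the upper right, reciprocals mirrored below, column totals) can be sketched in a few lines. The scores used here are reconstructed from figures quoted elsewhere in the text (the 9.000 score and 15.000 column total; the PRODUCTIVITY vs. SERVICE LIFE score is inferred from the priorities reported later), so treat them as illustrative:

```python
# Build the full pairwise-comparison table from the upper-right entries only,
# mirroring reciprocals across the diagonal, then total each column.
criteria = ["COST", "PRODUCTIVITY", "SERVICE LIFE"]
n = len(criteria)

# Entered scores, (row, column) -> score; reconstructed from the text:
# PRODUCTIVITY vs. COST = 9.000 implies COST vs. PRODUCTIVITY = 1/9, etc.
upper = {(0, 1): 1 / 9, (0, 2): 1 / 5, (1, 2): 3.0}

matrix = [[1.0] * n for _ in range(n)]   # diagonal is always 1.000
for (i, j), score in upper.items():
    matrix[i][j] = score                 # entered score (upper right)
    matrix[j][i] = 1.0 / score           # reciprocal: order of comparison reversed

column_totals = [sum(row[j] for row in matrix) for j in range(n)]
print(column_totals)   # the COST column totals 15.000, as in the example
```

In a spreadsheet, the lower-left cells would simply hold reciprocal formulas referencing the upper-right cells, which is exactly what the loop does here.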
To ensure that the table is populated correctly, follow the guidelines provided in Exhibit 4.
The reasoning behind each of the pairwise comparison “scores” is given in Exhibit 5. Presenting this information is not considered a requisite part of AHP, but it is a useful one. This small addition provides a valuable reference for the inexperienced or anyone reviewing prior decisions in order to improve the quality of future decisions.
Next, we will create another table, the normalized matrix, and calculate a priority value for each criterion. To normalize the matrix, the value in each cell is divided by its column total. The normalized COST/COST cell value is 1.000/15.000 = 0.067; the normalized PRODUCTIVITY/COST value is 9.000/15.000 = 0.600, and so on. The normalized matrix is shown in Exhibit 6. The final column of the table shows the priority of each criterion, calculated by averaging the normalized values in each row. The priority values reflect the relative importance of each criterion. The higher the priority value, the more important the criterion to the decision; the greater the difference between two priority values, the greater the relative importance of the higher-priority criterion. If calculated correctly, the sum of priorities equals unity.
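The normalization arithmetic is easy to automate. A minimal sketch, using a comparison matrix reconstructed from the figures quoted in the text (the off-diagonal entries not stated directly are inferred from the priorities reported in Exhibit 6):

```python
# Normalize the comparison matrix column-by-column, then average each row
# to obtain the criteria priorities.
matrix = [
    [1.0, 1 / 9, 1 / 5],   # COST
    [9.0, 1.0,   3.0],     # PRODUCTIVITY
    [5.0, 1 / 3, 1.0],     # SERVICE LIFE
]
n = len(matrix)
col_totals = [sum(row[j] for row in matrix) for j in range(n)]

# Each cell divided by its column total, e.g. 1.000 / 15.000 = 0.067
normalized = [[matrix[i][j] / col_totals[j] for j in range(n)]
              for i in range(n)]

# Priority of each criterion = average of its normalized row
priorities = [sum(row) / n for row in normalized]
print([round(p, 3) for p in priorities])   # [0.064, 0.669, 0.267]
assert abs(sum(priorities) - 1.0) < 1e-9   # priorities must sum to unity
```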
The results obtained thus far are often presented in the format shown in Exhibit 7, referred to here as the standard presentation of results. It is simply a composite of previous tables; the original comparison “scores” from Exhibit 3 are shown with the criteria priorities calculated in Exhibit 6.
Checking Consistency
To ensure that the analysis, and thus the decision-making process, is progressing rationally, we check the consistency of the criteria evaluations. In this section, thorough derivations of the calculations will not be presented. In-depth knowledge of the derivations is not required for successful use of AHP, and they are beyond the scope of this post. More information can be found in the references cited at the end of the post.
In its simplest form, a consistency check can be formulated in the following way: “Verify that a > c when it has been determined that a > b and b > c.” Extending this formulation slightly, to account for the degree of consistency, we may replace “a > c” with “a >> c” in the previous statement.
To “check consistency” is synonymous with “calculate the Consistency Ratio (C.R.) of the comparison table.” To do this, start with the standard presentation of Exhibit 7. Multiply each “score” by the priority corresponding to the criterion in its column. That is, multiply each cell in the COST column by the COST priority (0.064), multiply each cell in the PRODUCTIVITY column by the PRODUCTIVITY priority (0.669), and multiply each cell in the SERVICE LIFE column by the SERVICE LIFE priority (0.267). The table now consists of weighted columns. Sum each row, recording the result in the WEIGHTED SUM column, as shown in Exhibit 8.
Our next step is to calculate λmax (“lambda max” – see references for details), the average of the ratios of each criterion’s weighted sum (from Exhibit 8) to its priority (from Exhibit 7). The calculation of λmax for the criteria evaluations is shown in Exhibit 9.
The Consistency Index (C.I.) is now calculated, as shown in Exhibit 10, where N is the number of criteria included in the analysis. Finally, the Consistency Ratio is calculated as the ratio of the Consistency Index to the Random Consistency Index (R.I.) found in Exhibit 11 (see references for details).
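The entire consistency check (weighted sums, λmax, C.I., and C.R.) can be combined in one short sketch. The comparison matrix is reconstructed from the scores quoted in the text, and R.I. = 0.58 is the standard Random Consistency Index for a 3 x 3 matrix:

```python
# Consistency check for the criteria comparisons: weighted sums, lambda_max,
# Consistency Index (C.I.), and Consistency Ratio (C.R.).
matrix = [
    [1.0, 1 / 9, 1 / 5],   # COST
    [9.0, 1.0,   3.0],     # PRODUCTIVITY
    [5.0, 1 / 3, 1.0],     # SERVICE LIFE
]
n = len(matrix)
col_totals = [sum(row[j] for row in matrix) for j in range(n)]
priorities = [sum(matrix[i][j] / col_totals[j] for j in range(n)) / n
              for i in range(n)]

# Weighted sum of each row: each "score" times its column's priority
weighted_sums = [sum(matrix[i][j] * priorities[j] for j in range(n))
                 for i in range(n)]

# lambda_max: average ratio of each row's weighted sum to its priority
lambda_max = sum(ws / p for ws, p in zip(weighted_sums, priorities)) / n

ci = (lambda_max - n) / (n - 1)    # Consistency Index
RI = 0.58                          # Random Consistency Index for N = 3
cr = ci / RI                       # Consistency Ratio
print(round(cr, 3))                # 0.025 -- well below the 0.100 threshold
```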
An established guideline sets a threshold value of C.R. at 0.100. A set of comparisons with C.R. ≤ 0.100 is deemed consistent and a valid result can be expected. For C.R. > 0.100, decision-makers should consider revisiting the pairwise comparisons, adjusting evaluation “scores” to achieve greater consistency. Evaluating a large number of criteria may warrant accepting a higher C.R. due to the complexity inherent in analyzing many interrelated characteristics simultaneously. If any criteria are found to be dependent on others (see mutual independence in glossary, Vol. I), those criteria should be eliminated (or combined) to simplify the analysis. Also, aggregating evaluations from members of a decision-making group may induce a higher level of inconsistency; accepting a higher C.R. may be necessary to advance the decision-making process.
The consistency ratio calculation for our example is shown in Exhibit 12. The result is color-coded according to the threshold discussed above. Our C.R., at 0.025, is significantly below the threshold value, indicating a consistent, valid analysis. This level of consistency can be expected for relatively simple hierarchies like our example.
With a validated, consistent criteria evaluation, the analysis can now incorporate information about potential alternatives. Responses to our three RFPs are summarized in Exhibit 13, known as the Performance Matrix. Before beginning the comparisons, it should be verified that each alternative meets all minimum performance criteria established. Any that do not should be removed from the analysis for two reasons:
(1) Fewer alternatives require fewer calculations and less time to complete the analysis.
(2) The compensatory nature of AHP may result in misleading rankings. It is possible for excellent performance in several dimensions to earn an alternative a favorable ranking despite its disqualifying performance in one criterion, invalidating the results.
For purposes of our example, we will assume that no such disqualification conditions exist.
The information presented in the performance matrix is used to conduct pairwise comparisons of alternatives following the same process as the criteria evaluations. A set of tables (or matrices), comparable to that created for the criteria comparisons, will be generated for each criterion. The process followed each time is the same as above; therefore, it will be presented here with much less detail.
First, each potential source (alternative) will be compared on the dimension of COST, as shown in Exhibit 14. As before, only the upper right of the table requires values to be entered. To “score” each comparison, use the scale in Exhibit 2 and guidelines in Exhibit 4, focusing on “preference” instead of “importance.” For example, Jones Machinery’s lower price earns it a moderate preference (3.000) over Wiley’s Widget Works and a very strong preference (7.000) over Acme Apparatus Co. Wiley’s intermediate price earns it a strong preference (5.000) over Acme.
The pairwise comparisons are normalized and priorities calculated as shown in Exhibit 15. The standard presentation of results is provided in Exhibit 16.
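The alternative comparisons reuse the same arithmetic as the criteria evaluations. A sketch for the COST dimension, using the preference scores stated above (3.000, 7.000, 5.000); the resulting priorities follow directly from those scores and should match Exhibit 15:

```python
# Pairwise comparison of alternatives on COST: Jones vs. Wiley = 3,
# Jones vs. Acme = 7, Wiley vs. Acme = 5; reciprocals fill the lower left.
alternatives = ["Jones Machinery", "Wiley's Widget Works", "Acme Apparatus Co."]
matrix = [
    [1.0,   3.0,   7.0],   # Jones
    [1 / 3, 1.0,   5.0],   # Wiley
    [1 / 7, 1 / 5, 1.0],   # Acme
]
n = len(matrix)
col_totals = [sum(row[j] for row in matrix) for j in range(n)]

# Normalize and average rows, exactly as for the criteria
cost_priorities = [sum(matrix[i][j] / col_totals[j] for j in range(n)) / n
                   for i in range(n)]
for name, p in zip(alternatives, cost_priorities):
    print(f"{name}: {p:.3f}")   # Jones 0.643, Wiley 0.283, Acme 0.074
```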
Consistency of the evaluations must be checked, following the same process detailed for the criteria evaluations. The consistency check for the cost comparisons is shown in Exhibit 17.
Repeating the process for the two remaining criteria yields a similar set of tables and calculations. Comparisons of alternatives and calculation of PRODUCTIVITY priorities are shown in Exhibit 18. The consistency check for PRODUCTIVITY comparisons is shown in Exhibit 19.
Comparisons and calculations with respect to SERVICE LIFE are shown in Exhibit 20 and the corresponding consistency check in Exhibit 21. Note that the simplicity of the SERVICE LIFE comparisons resulted in a C.R. of 0.000, or “perfect” consistency.
Synthesizing the Model
The work of AHP culminates in model synthesis, where the question “which machine should we buy?” is finally answered. To calculate each alternative’s overall priority, we begin with the Local Priorities Table of Exhibit 22. This table simply compiles the previously-calculated priorities of each alternative with respect to each criterion (Exhibits 16, 18, 20).
Multiply each cell by the priority of its corresponding criterion, shown in Exhibit 7. These priority values are also called criteria weights; the results of this step are shown in Exhibit 23. Adding the values in each alternative’s row gives its OVERALL PRIORITY. If all calculations have been performed correctly, the sum of the overall priorities will equal unity.
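The synthesis step reduces to a weighted sum. In the sketch below, the COST priorities follow from the comparison scores given earlier, but the PRODUCTIVITY and SERVICE LIFE values are illustrative placeholders standing in for Exhibits 18 and 20 (the actual figures are not reproduced here):

```python
# Model synthesis: each alternative's overall priority is the sum of its
# local priorities weighted by the criteria weights from Exhibit 7.
# NOTE: PRODUCTIVITY and SERVICE LIFE local priorities are placeholders.
criteria_weights = {"COST": 0.064, "PRODUCTIVITY": 0.669, "SERVICE LIFE": 0.267}

local_priorities = {
    "Jones": {"COST": 0.643, "PRODUCTIVITY": 0.07, "SERVICE LIFE": 0.10},
    "Wiley": {"COST": 0.283, "PRODUCTIVITY": 0.28, "SERVICE LIFE": 0.30},
    "Acme":  {"COST": 0.074, "PRODUCTIVITY": 0.65, "SERVICE LIFE": 0.60},
}

overall = {
    alt: sum(criteria_weights[c] * locs[c] for c in criteria_weights)
    for alt, locs in local_priorities.items()
}
ranked = sorted(overall, key=overall.get, reverse=True)
print(ranked)                                    # highest overall priority first
assert abs(sum(overall.values()) - 1.0) < 1e-6   # overall priorities sum to unity
```

With these placeholder values the ranking is Acme, then Wiley, then Jones, consistent with the outcome described in the text; substituting the actual local priorities from Exhibits 16, 18, and 20 would reproduce Exhibit 23.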
The alternative with the highest overall priority is the preferred option. In our example, the higher productivity and longer service life offered by the Acme machine more than offset its higher price. At the opposite end of the spectrum, Jones’ low price does not compensate for its deficiencies in productivity and service life. Thus, despite its cost advantage, Jones receives the lowest preference ranking.
In the “Checking Consistency” section above, we acknowledge and accept that subjective evaluations may preclude a “perfect” analysis. We simply conduct a test to ensure that the imperfections are acceptable. Similar logic spurs us to conduct a sensitivity analysis of the final results. Inconsistency or compromises in evaluations, such as those reached in a group decision-making process, could alter the outcome of an analysis.
For our example, we consider four scenarios. The first scenario is the original analysis results, repeated for side-by-side comparison. The second – equally-weighted criteria – is a commonly used scenario; it is shown in the template as a “standard test.” Each scenario is accompanied by notes describing its significance – changes in preference rankings or other insights. The first two scenarios are shown in Exhibit 24. Equally-weighted criteria cause no change in preference rankings. Trial-and-error calculations reveal that reducing the PRODUCTIVITY weight as low as 0.300, with the others weighted equally, has no impact on the preference rankings.
The final two scenarios considered for our example are “wild cards.” Any combination of criteria weights (that sum to unity) that seems plausible can be evaluated. Typical areas of consideration include the impacts of changes in operating philosophy (e.g. cost focus vs. performance optimization) and the influence on the final decision of compromises made throughout the process. The third scenario in our example posits a cost focus; COST is given twice the weight of each of the other criteria. In this situation, the preference rankings are reversed relative to the original analysis. However, the degree of preference is very small compared to the original decision.
The fourth scenario maintains the productivity focus of the original evaluations, but reverses the weights of the remaining two criteria. No change in preference rankings occurs, due to the relative “power” of the PRODUCTIVITY weight. The final two scenarios are shown in Exhibit 25.
Any number of scenarios can be explored in a sensitivity analysis. The number of scenarios and the weighting combinations used will be directed by the decision environment. The less homogeneous the group’s evaluations, for example, the more scenarios may warrant review. The ability to easily iterate scenarios, or conduct trial-and-error explorations of significant scenarios, is the primary advantage of using a spreadsheet; automated calculations drastically reduce the time required to perform AHP.
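Iterating weight scenarios is exactly the kind of automated recalculation described above. A sketch, again using illustrative placeholder local priorities (only the COST column follows directly from the comparison scores given in the text):

```python
# Sensitivity analysis: re-run the synthesis under alternative criteria
# weightings and compare the resulting preference rankings.
# NOTE: local priorities below are placeholders, not the exhibit values.
local_priorities = {                     # [COST, PRODUCTIVITY, SERVICE LIFE]
    "Jones": [0.643, 0.07, 0.10],
    "Wiley": [0.283, 0.28, 0.30],
    "Acme":  [0.074, 0.65, 0.60],
}

scenarios = {                            # each weight set sums to unity
    "original":           [0.064, 0.669, 0.267],
    "equal weights":      [1 / 3, 1 / 3, 1 / 3],   # the "standard test"
    "cost focus":         [0.50, 0.25, 0.25],      # COST twice each other criterion
    "productivity focus": [0.267, 0.669, 0.064],   # COST / SERVICE LIFE swapped
}

for name, weights in scenarios.items():
    overall = {alt: sum(w * p for w, p in zip(weights, locs))
               for alt, locs in local_priorities.items()}
    ranking = sorted(overall, key=overall.get, reverse=True)
    print(f"{name}: {ranking}")
```

Each pass recomputes the overall priorities in full, so adding a scenario costs one line, which is the spreadsheet advantage noted above.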
Considering All Things
All tools come with advantages and disadvantages. Only practical concerns will be mentioned here, leaving the more theoretical and abstract discussions to the academics (see references for more information).
AHP is a useful tool for “every-day” decision-makers. It is quite accessible in the non-matrix form presented here, though simplifying the mathematics relegates its status to approximation. However, approximation with greatly reduced effort is something to be applauded, especially in the case of AHP. It seems that only hardcore mathematicians find the slight reduction in accuracy disturbing; the average practitioner need not be concerned with it.
A characteristic of AHP that may be of concern to practitioners is the potential for rank reversal. Adding an alternative to the analysis may cause the preference rankings of two other alternatives to be reversed in the expanded analysis. If the reversed preferences are of sufficiently low rank, or otherwise do not affect the final decision, it is probably safe to ignore the reversal. Sensitivity analysis can be used to gain further insight and make this determination.
If the reversal has a significant impact on the final decision, and sensitivity analysis does not provide sufficient insight to address the situation, AHP should be supplemented with – or replaced by – other decision-making techniques. If it can be resolved with sound judgment and reasoning, do so. After all, AHP is only an aid; judgment and reasoning should always be applied, no matter the output of the model.
If more drastic measures are needed to finalize a decision, there are many other decision-making tools and models to consider. Future installments of “Making Decisions” will present some of them.
If you have questions or feedback, feel free to post in the comments section below or contact JayWink directly.
For a directory of “Making Decisions” volumes on “The Third Degree,” see “Vol. I: Introduction and Terminology.”
[Link] “Multi-criteria analysis: a manual.” Department for Communities and Local Government, London, January 2009.
[Link] “Guidelines for applying multi-criteria analysis to the assessment of criteria and indicators.”
[Link] “Multicriteria Decision Methods: An Attempt to Evaluate and Unify.” Keun Tae Cho; Mathematical and Computer Modelling, May 2003.
[Link] Multi-criteria Analysis in the Renewable Energy Industry. J.R. San Cristobal Mateo, 2012.
[Link] “A Straightforward Explanation of the Mathematical Foundation of the Analytic Hierarchy Process (AHP).” Decision Lens.
[Link] “Analytic Hierarchy Process (AHP) Tutorial.” Kardi Teknomo, 2006.
[Link] “Application of the AHP in project management.” Kamal M. Al-Subhi Al-Harbi, International Journal of Project Management, 2001.
[Link] “Decision making with the analytic hierarchy process.” Thomas L. Saaty, International Journal of Services Sciences, 2008.
[Link] “How to make a decision: The Analytic Hierarchy Process.” Thomas L. Saaty, European Journal of Operational Research, 1990.
[Link] “The Analytic Hierarchy Process – What It Is and How It Is Used.” R.W. Saaty, Mathematical Modelling, December 1987.
[Link] Practical Decision Making using Super Decisions v3. E. Mu and M. Pereyra-Rojas, 2017.
Jody W. Phelps, MSc, PMP®, MBA
JayWink Solutions, LLC
If you'd like to contribute to this blog, please email email@example.com with your suggestions.
© JayWink Solutions, LLC