Learning Games
Wed, 30 Dec 2020 15:30:00 GMT
http://jaywinksolutions.com/blog/learning-games

     Training the workforce is a critical responsibility of an organization’s management.  Constant effort is required to ensure that all members are operating according to the latest information and techniques.  Whether training is developed and delivered by internal resources or third-party trainers, more efficacious techniques are always sought.
     Learning games, as we know them, have existed for decades (perhaps even longer than we realize), but are gaining popularity in the 21st century.  Younger generations’ affinity for technology and games, including role-playing games, makes them particularly receptive to this type of training exercise.  Learning games need not be purely digital, however.  In fact, games that employ physical artifacts have significant advantages of their own.
     Several terms may be encountered when researching learning games.  Most include “game,” “simulation,” or a combination of the two.  Broadly-accepted definitions describe games as activities that include an element of competition, while simulations attempt to reproduce aspects of the real world.  For simplicity and brevity, this post will use learning games as an umbrella term covering all variants, including simulation games, serious games, serious simulation games, and so on.
     Learning games have been developed for a variety of topics; other topics could still benefit from creative developers’ attention.  This discussion focuses on typical Third Degree topics, such as engineering, operations and supply chain management, business strategy, and project management in post-secondary and professional education.
     Learning games can take many forms.  Simulations of critical processes are made as realistic as possible to convey the gravity of the real-world situations they represent and to teach participants how to manage the risks involved in order to make prudent decisions.  Games can also teach certain concepts or proper judgment through fantastical presentations; this approach can be particularly useful when faced with a reluctant, unreceptive audience.
     Learning games can be digital, analog, or hybrid.  Computer games are popular with tech-savvy students that are accustomed to sophisticated programs and high-quality graphics.  Highly-sophisticated versions may even employ virtual reality or augmented reality.  Use of highly accurate digital twins can improve a game’s effectiveness.
     Tabletop games utilize physical space and objects in game play.  The layout of a game need not be limited to the size of a table; an entire room can be mapped as a gameboard, allowing people, as well as objects, ample space to move about.  The physical representation of the learning game can be used to impart lessons more tangibly than displays and numbers can achieve.  Concepts of inventory and transportation waste are examples of this type of application.  Hybrid games employ digital and physical elements, seeking the most effective combination to aid learning and retention.
     The level of interaction participants have with a learning game is another important characteristic to consider.  Observational games present participants with a situation and a fixed data set.  The data can be analyzed in various ways and other queries may be possible.  From these analyses, participants draw conclusions and make decisions that are then critiqued by facilitators.  Observational games can be thought of as enhanced case studies; they are somewhat more interactive, and may employ multimedia technology or other embellishments for a more appealing and engaging presentation.
     Experimental games, on the other hand, are highly interactive, allowing participants to modify elements of the game and directly assess the impacts of their decisions on system or process performance.  Multiple analyses can be performed in search of an optimal solution.  The ability to manipulate the system and receive feedback on the effects of each change often leads to deeper understanding of the systems and processes simulated.  The resulting competence improves safety and efficiency of the real-world counterparts when the students become managers.
     Learning games can also be categorized according to their level of complexity.  One such taxonomy [Wood, 2007] includes insight games, analysis games, and capstone games.  Wood’s taxonomy of games is summarized in Exhibit 1.
     Insight games seek to develop understanding of basic concepts and context required for students to comprehend subsequent material and advanced concepts.
     Students develop required skills “through iterations of hypothesis, trial, and assessment” [Wood, 2007] in analysis games.  These games seek to bridge the gap between understanding a concept and performing a related task effectively.  Wood cites the example of riding a bicycle; a student may understand the task by reading its written description, but this does not ensure a safe ride upon first attempt.  Practice is needed to match ability with understanding.
     Capstone games, as you may have guessed, require participants to consider multiple objectives or perspectives.  These games incorporate multiple disciplines in a single game to simulate the complexity of decisions that real-world managers must make.  When conducted as a team exercise, with members of varying background and experience, capstone games can provide a very realistic approximation of situations faced by managers on a regular basis.
Stages of Game Play
     Learning games typically proceed in three stages:  preparation, play, and debriefing.  Players and facilitators bear responsibility for a successful game in each stage.  As is the case with any type of training, engagement of participants is critical to success.
     In the preparation stage, facilitators are responsible for “setting the stage” for game play.  This may include preparing presentations to explain the rules of the game, assigning students to teams within the group, or stocking the physical space with required materials or accommodations.  Players are required to review any information provided in advance and procure any material they are expected to provide.
     During game play, players are expected to remain engaged, maximizing the learning benefit for themselves and others through active participation and knowledge-sharing.  Facilitators enforce rules, such as time limits, and may have to “keep score” as the game progresses.  Facilitators also answer questions, provide guidance to ensure a successful game, and monitor the proceedings for improvement ideas.
     An effective debriefing is essential to a successful learning game.  It is in this stage that participants’ performance is evaluated.  Critiques of decisions made during the game provide participants with valuable insights.  In many game configurations, this is where the greatest learning occurs; it provides an opportunity to learn from the experience of other teams, including situations that may not have occurred in one’s own game.  Facilitators are responsible for providing information participants may need in order to understand why certain decisions are better than others.  Players may assist facilitators in providing critiques and explanations, ensuring that all participants develop the understanding necessary to apply the new information to future real-world scenarios.
 
Benefits of Learning Games
     Learning games provide many benefits to players and the organizations that employ them.  Advantages include the number of people that can be effectively trained, the time and expense required for training, and the risk posed by poor performance.  The advantages of learning games relative to on-the-job training are summarized in Exhibit 2, where the “real world” is compared to a learning game environment.
     Learning games may also offer additional benefits related to onboarding, team-building, or unique aspects of your organization.  Consider all possible benefits to be gained when evaluating learning games for your team.
 
Example Learning Games
     While some organizations may use proprietary learning games, many are widely distributed, often for free or at very low cost.  Several learning games are cited below, but only to serve as inspiration.  Assessment of each should be conducted in the context of the group to be trained; therefore, a thorough review of each here would add little value.  A detailed review may also encourage readers to limit their choices to the games mentioned, which is contrary to the objective of this post.

Physics/Engineering:  Whether your interest is in equations of motion or medieval warfare, the Virtual Trebuchet is for you.  Players define several attributes of the trebuchet, launch a projectile, and observe its trajectory.  Peak height, maximum distance, and other flight data are displayed for comparison of different configurations.  Visually simple, highly educational, and surprisingly fun, even the non-nerds among us can appreciate this one.
Project Management:  The Project Management Institute (PMI) Educational Foundation has developed the Tower Game for players ranging from elementary school students to practicing professionals.  Teams compete to build the “tallest” tower in a fixed time period with a standard set of materials.  “Height bonuses” are earned through resource efficiency.  The game can be customized according to the group’s experience level.
Operations Management:  Considering the complexity of operations management, it is no surprise that these games are among the most sophisticated.  OMG!, the Operations Management Game, is a tabletop game that maps a production process with tablemats for each step.  Each step is represented by a single player; the number of steps can be varied to accommodate groups of different size.  Physical artifacts represent work-in-process (WIP) and finished goods inventories, and dice are used to simulate demand and process variability.  Upgrades are available when sufficient profits have been retained to purchase them.  Many important aspects of operations management are included in this game; it could be a valuable learning tool.
            A camshaft manufacturing process is modelled in Watfactory, a game used to study techniques of variation reduction.  It includes a large number of variables – 60 variable inputs, 30 fixed inputs, and 3 process step outputs – and several investigation options that define data analyses to be performed.  This one is not for beginners or the faint of heart, but is a solid test of skills.
Supply Chain Management:  Revered as the granddaddy of all learning games, the Beer Game was first developed at MIT in the 1960s to explore the bullwhip effect in supply chains.  Since then, many alternate versions have been developed, including virtual ones.
Business Strategy:  BizMAP is a game that can be used to assess an individual’s aptitude for entrepreneurship or suitability for an executive leadership role.  If one trusts its predictive capability, it could be an extremely valuable aid in averting disasters.  Poor executive decision-making or an ill-advised decision to quit one’s day job can be avoided.
     Many other learning games are available with differing objectives, configurations, and complexity.  Other professional practices, including Managerial Accounting (!) and bridge design, can be explored using learning games.  Explore and be amazed!
 
Serious Business
     In some circles, learning games have become serious business.  High-stakes decisions are often based on simulations.  Ever heard of War Games?  The accuracy and reliability of such simulations are serious matters indeed.
     Less costly in terms of human life, but potentially catastrophic in financial terms, are the simulations businesses use to model the competitive landscape in which they operate.  If invalid assumptions are made, or the simulation otherwise misrepresents the competitive marketplace, decisions based on it could be financially ruinous.
     The development of the learning games industry reflects just how serious it has become.  Industry conferences, such as the Serious Games Summit and the Serious Play Conference, are held for serious gamers to share developments with one another.  A professional association – Association for Business Simulation and Experiential Learning (ABSEL) – has also been chartered to serve this growing community.
            In the early 2000s, MIT embarked on its Games to Teach Project, a collaboration aimed at developing games for science and engineering instruction.  Decades after launching the movement, MIT’s ongoing commitment to learning games is reflected in the Scheller Teacher Education Program, in which these tools play a prominent role.
 
     No matter your field of study or level of experience, chances are good that a learning game has been developed for your target demographic.  However, improvements can always be made.  If you are an aspiring programmer or game developer, a learning game is an excellent vehicle for demonstrating your skills while providing value to users.  It will look great on your resume!
 
     If you have a favorite learning game, or an idea for a new one, please tell us about it in the comments section.  If you would like to introduce learning games to your organization, contact JayWink Solutions for additional guidance.
 
References
[Link] “The Role of Computer Games in the Future of Manufacturing Education and Training.”  Sudhanshu Nahata; Manufacturing Engineering, November 2020.
[Link] “Online Games to Teach Operations.”  Samuel C. Wood; INFORMS Transactions on Education, 2007.
[Link] “Game playing and operations management education.”  Michael A. Lewis and Harvey R. Maylor; International Journal of Production Economics, January 2007.
[Link] “A game for the education and training of production/operations management.”  Hongyi Sun; Education + Training, December 1998.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
Safety First!  Or is it?
Wed, 16 Dec 2020 15:30:00 GMT
http://jaywinksolutions.com/blog/safety-first-or-is-it

     Many organizations adopt the “Safety First!” mantra, but what does it mean?  The answer, of course, differs from one organization, person, or situation to another.  If an organization’s leaders truly live the mantra, its meaning will be consistent across time, situations, and parties involved.  It will also be well-documented, widely and regularly communicated, and supported by action.
     In short, the “Safety First!” mantra implies that an organization has developed a safety culture.  However, many fall far short of this ideal; often it is because leaders believe that adopting the mantra will spur the development of safety culture.  In fact, the reverse is required; only in a culture of safety can the “Safety First!” mantra convey a coherent message or be meaningful to members of the organization.
Safety Vocabulary
     All those engaged in a discussion of safety within an organization need to share a common vocabulary.  The definitions of terms may differ slightly between organizations, but can be expected to convey very similar meanings.  The terms and descriptions below are only suggestions; each organization should identify all terms necessary to sustain productive discussions and define them in the most appropriate way for their members.
     An organization with a well-developed safety culture will be relentless in identifying hazards in the workplace.  A hazard is an environment, situation, or practice that could result in harm to an individual or group.  Examples include elevated work platforms, energized electrical equipment, and chemical exposure.  Many hazards exist throughout the typical workplace; each should be evaluated for the feasibility of elimination.
     Risk is the likelihood that a hazard will cause harm and the extent or severity of that harm.  “High-risk” endeavors are usually considered those that exhibit a high probability of severe or extensive harm.  Defining categories of risk can aid in prioritizing mitigation and elimination efforts.
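     Risk categories are often built from simple likelihood and severity scales.  As a purely illustrative sketch (the scales, labels, and thresholds below are assumptions, not a published standard), a likelihood rating and a severity rating can be combined into a score that drives prioritization:

```python
# Illustrative risk-scoring sketch: rating scales and thresholds are
# assumptions for demonstration only, not a published standard.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}
SEVERITY = {"minor": 1, "serious": 2, "severe": 3, "catastrophic": 4}

def risk_score(likelihood: str, severity: str) -> int:
    """Combine likelihood and severity ratings into a single score."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_category(score: int) -> str:
    """Map a score to a category used to prioritize mitigation effort."""
    if score >= 9:
        return "high"    # eliminate or mitigate before work proceeds
    if score >= 4:
        return "medium"  # mitigate on a defined schedule
    return "low"         # monitor and review periodically

# Example: an elevated work platform judged "possible" to cause "severe" harm
score = risk_score("possible", "severe")
print(score, risk_category(score))   # 6 medium
```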
     An accident is an occurrence of harm – usually physical – to an individual or group.  The effects of an accident may be immediate, as in the case of a fall, or protracted, as in the case of radiation exposure.  A near miss is an occurrence that could be expected to cause harm, but the individuals involved are fortunate enough to avoid it.  An electrical arc within an open panel that does not seriously shock the technician working on it is a near miss that demonstrates the importance of personal protective equipment (PPE).
     PPE is the last line of defense against injury.  It is a broad term that includes any device used to protect an individual from harm, including safety glasses, ear plugs, hard hats, insulating gloves, steel-toed shoes, cut-resistant garments, face shields, retention harnesses, and breathing apparatus.  PPE that is adequately specified and properly worn can reduce the risk of an activity, but cannot eliminate it.  It can prevent or reduce injury in many cases, but provides no guarantee.  At best, PPE can turn an accident into a near miss.
     Accidents, near misses, and property-damage events are often called, collectively, incidents.  Use of this term is convenient for ensuring an investigation occurs, regardless of the type of event.  The difference between a near miss or property-damage event and an accident is often pure coincidence or good fortune.  These are not reliable saviors; they should not be expected at the next incident.
     Dangerous behavior includes activities that can be expected to result in an incident.  Examples include driving a loaded forklift too fast without checking for traffic at intersections, failing to remove combustible material from an area before cutting or welding equipment is used, and carelessly handling dangerous chemicals in open containers.  Such behaviors develop for various reasons; they could signal a lack of training, complacency, or malicious intent.  In any case, action must be taken to correct the behavior before an incident occurs.
 
Elements of Safety Culture
     The steps required to develop a safety culture are not particularly difficult to understand.  They may be difficult to implement, however, because they require a deep commitment of decision-makers to prioritize safety over other concerns.  When pressured to meet production requirements or cost-reduction targets, managers can be tempted to abandon safety-focused initiatives that they perceive as threats to the attainment of other metrics.  Commitment at the highest levels of an organization is required to prevent the sacrifice of safety to competing objectives.
     A common theme among discussions of organizational programs is documentation.  Developing a culture of safety, like so many other initiatives, requires a significant amount of documentation.  The value of documentation transcends its functional attributes; it provides evidence of commitment of the organization’s leadership to treat safety as its highest priority.  Documentation replaces verbal attestations and platitudes with “rules of the road,” or expectations of conduct at all levels of the organization.
     Documentation can become extensive over time.  At a minimum, it should include:
  • Hazard identification:  descriptions of exactly what makes an operation, machine, place, etc. potentially dangerous.
  • Risk assessment:  for each hazard, evaluate the likelihood and severity or extent of harm.
  • PPE required:  specify the personal protective equipment prescribed to protect individuals from known hazards.
  • Training requirements:  define all training required for an individual to remain safe from known hazards.
  • Work instructions:  detailed procedures to ensure safe practices in routine work.
  • Maintenance instructions: detailed procedures to ensure safe practices in non-routine work.
  • Assessments:  reviews and evaluations of program effectiveness.
Additional documentation may be deemed necessary as the culture matures.  Specifically, conducting safety program assessments may reveal the need for additional procedures or other refinements of the documentation package.
     Several other characteristics must be present to sustain a successful safety culture; chief among them is effective communication.  Reports of incidents and safety metrics must be viewed as communication vehicles, not the sole required output of a safety program.  This means that incidents are investigated, not merely reported.  Root causes must be found, addressed, and communicated, updating members’ understanding of the hazards they face.
     Communication must also be open in the opposite direction.  Team members should be provided a clearly-defined channel for communicating safety concerns and suggestions to those responsible for implementing changes.  This hints at another critical element of safety culture – security.  If team members fear reprisal for reporting issues, safety and morale will suffer.  Each member should be encouraged to activate this communication channel and given the confidence to do so whenever they see fit.
     All incidents – accidents, near misses, and property-damage events – should be investigated with equal intensity.  Non-accident incidents are simply precursors to accidents; thorough investigation provides an opportunity to prevent future accidents or other incidents.  Open communication and investigative responses are needed to ensure that incidents – near misses, in particular – are reported.
     Non-routine work is a significant source of workplace incidents.  When maintenance and repair personnel are rushed to release equipment, the risk of injury increases proportionally to the stress they feel.  Failing to provide ample time to perform non-routine tasks carefully and thoroughly increases risk to maintenance and operations personnel.
     In their haste to return equipment to service, technicians may take shortcuts or fail to strictly adhere to all prescribed safety procedures, increasing the risk of an incident.  Once the equipment is returned to service, shortcuts taken could reduce its reliability, precipitating a catastrophic failure.  Such a failure may cause injury of operators or a near miss in addition to property damage.  A culture of safety allows sufficient time for non-routine work to be performed carefully and to be thoroughly tested before equipment is returned to service.
     Similarly, excessive time pressure on investigations of issues, training, or production can lead to stress, shortcuts, and distraction.  All of these considerably increase the probability of an incident occurring.  No one should have to rely on luck to avoid injury because the pressures to which they are subjected make an incident nearly inevitable.
     In a mature safety culture, culpability of individuals will be assessed according to the type of error committed.  If a blameless error – one that “could happen to anyone” – is committed, or a design flaw that invites misinterpretation is discovered, efforts should be made to mistake-proof the system (see “The War on Error – Vol. II:  Poka Yoke”).  If dangerous behavior or intentional violations cause an incident, disciplinary action may be taken.  Other considerations include an individual’s safety record, medical condition, and drug (prescription, OTC, or illicit) use.
     To ensure consistent treatment of all personnel (no playing favorites), an evaluation tool, such as Reason’s Culpability Decision Tree, shown in Exhibit 1, can be used.  Administered by an impartial individual or small group, the series of questions will lead to consistent conclusions.  Responding to each type of error with predefined and published actions will further support the team’s perception of procedural justice.
     While Reason’s Culpability Decision Tree will likely lead to a majority of responses requiring no punitive action, it is preferable to a “no blame” system.  A no-blame system provides little opportunity to correct patterns of behavior, even when that behavior is negligent or reckless.
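     To illustrate how a fixed question sequence produces consistent conclusions, consider the simplified sketch below.  The questions and categories are illustrative paraphrases only; they are not the published wording or structure of Reason’s tool.

```python
# Simplified, illustrative culpability evaluation. The questions and
# categories are assumptions for demonstration; they are not the exact
# wording or structure of Reason's Culpability Decision Tree.

def evaluate_culpability(answers: dict) -> str:
    """Walk a fixed question sequence and return a culpability category."""
    if answers["intended_harm"]:
        return "malicious act - disciplinary action"
    if answers["knowingly_violated_procedure"]:
        return "reckless violation - disciplinary review"
    if not answers["procedure_was_workable"]:
        return "system-induced violation - fix the procedure"
    if answers["others_would_make_same_error"]:   # the "substitution test"
        return "blameless error - mistake-proof the system"
    return "possible negligence - coaching and closer review"

# Example: an error that peers agree they could have made under the same conditions
print(evaluate_culpability({
    "intended_harm": False,
    "knowingly_violated_procedure": False,
    "procedure_was_workable": True,
    "others_would_make_same_error": True,
}))   # blameless error - mistake-proof the system
```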
     A culture of safety espouses a continuous improvement mindset.  In this vein, periodic reviews should be conducted to evaluate the effectiveness of the systems in place to ensure safety.  Over time, equipment degrades and is often modified or upgraded, changing the characteristics of its safe operation and maintenance.  Operators and technicians come to recognize previously unidentified hazards and may develop well-intentioned work-arounds.  A periodic review reminds personnel of the communication channels available to them and allows the documentation and procedures to be updated to reflect the current condition of the system.
     Results of routine health screenings should be included in these reviews to assess the effectiveness of prescribed PPE and documented procedures.  In addition to vision and hearing checkups, individuals should be examined for signs of repetitive stress or vibration-induced disorders.  They should also have the opportunity to discuss their stress levels and overall well-being.  It may be possible to identify negative trends before conditions become debilitating.  Preventive measures can then be implemented to safeguard employees from injury.  One of the simplest responses to many work-related issues is to schedule task rotations.  Doing so reduces the risk of repetitive stress disorders and errors caused by complacency or boredom.
 
     Organizations that have mature safety cultures prioritize the well-being of their employees, customers, and community.  The evolution of an organization can start with one person.  You shouldn’t expect it to be easy or fast, but doing what’s right is timeless.
 
     For assistance in hazard identification, risk assessment, developing procedures, or other necessities of safety culture, leave a comment below or contact JayWink Solutions directly.
 
References
[Link] “A Roadmap to a Just Culture:  Enhancing the Safety Environment.”  Global Aviation Information Network (GAIN), September 2004.
[Link] The 12 Principles of Manufacturing Excellence.  Larry E. Fast; CRC Press, 2011.
[Link] “The health and safety toolbox.”  UK Health and Safety Executive, 2014.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
The War on Error – Vol. IX:  Taguchi Loss Function
Wed, 02 Dec 2020 15:30:00 GMT
http://jaywinksolutions.com/blog/the-war-on-error-vol-ix-taguchi-loss-function

     Choosing effective strategies for waging war against error in manufacturing and service operations requires an understanding of “the enemy.”  The types of error to be combatted, the sources of these errors, and the amount of error that will be tolerated are important components of a functional definition (see Vol. I for an introduction).
     The traditional view is that the amount of error to be accepted is defined by the specification limits of each characteristic of interest.  Exceeding the specified tolerance of any characteristic immediately transforms the process output from “good” to “bad.”  This is a very restrictive and misleading point of view.  Much greater insight is provided regarding product performance and customer satisfaction by loss functions.
     Named for its developer, the renowned Genichi Taguchi, the Taguchi Loss Function treats quality as a variable output.  In contrast to the “goal post” philosophy described above, which uses a digital step function, the Taguchi Loss Function quantifies quality on an analog scale.
     According to goal-post philosophy, characteristic values that fall inside the tolerance band by the slightest amount are acceptable (no quality loss) and those that fall outside the tolerance band by the slightest amount are unacceptable (total loss).  This is shown conceptually in Exhibit 1.  Recognizing that the difference in quality between these two conditions was quite small, Taguchi realized that quality at any point in the characteristic range could be expressed relative to its target value.  Furthermore, he concluded that any deviation from a characteristic’s target value represents a corresponding reduction in quality with a commensurate “loss to society.”  The Taguchi Loss Function is shown conceptually in Exhibit 2.
Sources of Loss
     Many struggle to understand Taguchi’s characterization of variable quality as loss to society.  It is, however, a clear connection once given proper consideration.  Resources consumed to compensate for reduced quality are unavailable for more productive use.  Whether the effects on consumers and providers are direct or indirect, each incurs a loss.
     Sources of loss are varied and may not be immediately obvious, in large part due to the paradigm shift required to transition from goal post thinking.  The most salient loss to providers comes in the form of scrap and rework.  Material, labor, and time losses are easily identifiable, even by goal-post thinkers.  Somewhat more difficult to capture fully, warranty costs are also salient to most providers.  Lost productive capacity, reverse logistics, and product replacement costs can be substantial.  Worse, a damaged reputation and subsequent loss of good will in the marketplace can result in lost sales.  Some existing customers may remain loyal, but attracting new customers becomes ever more difficult.  The difficulty calculating this loss accurately further compounds the problem; misattribution of the cause of slumping sales will make a turnaround nearly impossible.
     Consumers experience losses due to reduced product or service performance.  If the time required to receive service or to use a product increases, this is a direct cost to the consumer.  Reduced performance lowers the value of the product or service and may prompt consumers to seek alternatives.
     A product that requires more frequent maintenance or repair than anticipated incurs losses of time, money, and productivity for consumers.  This could, in turn, lead to warranty costs, reputational damage, and lost sales for the producer.
     The examples cited are common losses and are somewhat generic for broad applicability.  Careful review of any product or service offered is warranted to discover other sources of loss.  Safety and ergonomic issues that could lead to injury and liability claims should be front-of-mind.  Environmental impacts and other regulatory issues must also be considered.  There may be other sources of loss that are unique to a product type or industry; thorough consideration of special characteristics is vital.
 
Nominal is Best
     The most common formulation of the Taguchi Loss Function is the “nominal is best” type (see Exhibit 2).  Typical assumptions of this formulation include:
  • Target (“best”) value is at center of tolerance range.
  • Loss = 0 at target value.
  • Characteristic values are normally distributed.
  • Out-of-spec conditions are reliably captured.
  • Failures are due to compound effects (i.e. each characteristic is within its specified range, but the combination is unsatisfactory).
     For product manufacturing, the scrap value may be used to calculate the loss function.  If out-of-tolerance conditions can escape, or other conditions exist that would cause greater losses, the calculation can be modified accordingly.  Also, service providers can define the maximum loss as is most appropriate for them, such as repair cost, loss of future sales, etc.  Whatever definition is chosen, it should be applied consistently to allow for comparisons across time, products, or service offerings.  Using the scrap value of a product provides consistency that may be difficult to achieve with other definitions of maximum loss.
     The nominal is best Taguchi Loss Function is a quadratic function of the form:
            L(x) = k (x – t)^2, where:
  • L(x) is the loss experienced at characteristic value x;
  • k is the loss coefficient, a proportionality constant;
  • x is the measured (actual) characteristic value;
  • t is the target characteristic value.
     To calculate k, using the assumptions stated above, set x equal to the maximum allowable value of the bilateral tolerance (USL) and L(x) equal to the maximum loss (i.e. scrap cost).  Solve for k by rearranging the loss function expression:  k = L(x)/(x – t)^2.
     With a known k value, the loss incurred by the deviation from the target characteristic value can be calculated for each occurrence.  Total and average loss/part can then be calculated and used to analyze process performance.
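     To make the calculation concrete, here is a minimal sketch in Python, assuming a hypothetical characteristic with a 10.0 mm target, a 10.5 mm upper specification limit, a $4.00 scrap cost, and a handful of invented measurements:

```python
# Nominal-is-best Taguchi loss: L(x) = k * (x - t)^2
# The target, tolerance, scrap cost, and measurements below are hypothetical.

target = 10.0        # target characteristic value, t (mm)
usl = 10.5           # upper specification limit (mm)
scrap_cost = 4.00    # loss incurred at the specification limit ($)

# Solve L(x) = k * (x - t)^2 for k at x = USL, L = scrap cost
k = scrap_cost / (usl - target) ** 2          # k = 16.0 $/mm^2

def loss(x: float) -> float:
    """Loss ($) incurred by a single part measured at x."""
    return k * (x - target) ** 2

measurements = [10.1, 9.8, 10.4, 10.0, 10.3]
losses = [loss(x) for x in measurements]

total_loss = sum(losses)
average_loss_per_part = total_loss / len(measurements)
print(f"k = {k:.2f}")
print(f"total loss = ${total_loss:.2f}, average loss/part = ${average_loss_per_part:.2f}")
```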
     For a visual comparison of the goal post model and the loss function, use the interactive model from GeoGebra.
 
Other Loss Function Formulations
     There are two special cases that warrant particular attention:  Smaller is Better and Bigger is Better.  When lower characteristic values are desirable – i.e. zero is ideal – the loss function can be simplified by setting t = 0.  The resulting smaller is better formulation is L(x) = kx^2, where k is calculated using the value of x at which a product would be scrapped or a process ceased.  Example characteristics that utilize this formulation include noise levels, pollutant emissions, and response time of a system.  The smaller is better loss function is shown conceptually in Exhibit 3.
     When larger characteristic values provide greater performance or customer value, the bigger is better formulation of the loss function is used, in which the inverse of the characteristic value is used to calculate losses.  It takes the form L(x) = k (1/x)^2.  At x = 0, the loss would be infinite – an unrealistic result.  More likely, there is a minimum anticipated value of the characteristic; this value should be used to calculate k.  This value also defines the maximum expected loss per unit.  The bigger is better loss function is shown conceptually in Exhibit 4.
     This formulation also suggests that zero loss cannot be achieved.  Doing so requires the characteristic to reach an infinite value – another unrealistic result.  In practical terms, there may be a value beyond which the relative benefits are imperceptible, resulting in an effectively zero loss.  This idiosyncrasy does not diminish its comparative value.
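     A brief sketch of the two alternate formulations, again with invented figures, shows how k is derived from the scrap point or from the minimum anticipated value:

```python
# Smaller-is-better: L(x) = k * x^2, with k set at the value of x where the
# product would be scrapped. Bigger-is-better: L(x) = k * (1/x)^2, with k set
# at the minimum anticipated characteristic value. All figures are hypothetical.

def smaller_is_better(x, x_scrap, max_loss):
    k = max_loss / x_scrap ** 2
    return k * x ** 2

def bigger_is_better(x, x_min, max_loss):
    k = max_loss * x_min ** 2          # from L(x_min) = k * (1/x_min)^2
    return k / x ** 2

# e.g. noise level scrapped at 80 dB with a $50 loss, measured at 60 dB
print(smaller_is_better(60, 80, 50.0))    # ~$28.13

# e.g. weld strength with a 200 N minimum anticipated value and $50 maximum
# loss, measured at 300 N
print(bigger_is_better(300, 200, 50.0))   # ~$22.22
```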
 
Relationship to Other Programs
     Six Sigma initiatives attempt to reduce variation and center process means in their distributions.  Though the objectives are often defined in goal post terminology, Six Sigma is highly compatible with the paradigm of the loss function.  Both pursue consistent output, though they value that output differently.
     The Taguchi Loss Function fosters a continuous improvement mindset.  Until all losses are zero, there are potential improvements to be made.  The loss function formulations presented provide a method to determine the cost-effectiveness of proposed improvement projects.  First, a baseline is established; then anticipated gains (reduced losses) can be calculated.  If the anticipated gains exceed planned expenditures, the project is justified.
     Project objectives may even be defined by a loss function.  That is, a target loss value may be defined; then solutions are sought to achieve the target level.  This application is not in common use; the difficulty in achieving the necessary paradigm shift ensures it.
 
Summary
     A mindset adjustment is required to transition from traditional (goal post) quality evaluations to Taguchi Loss Functions.  If this is achieved, they can be used to effect by following a simple procedure:
  • Define the reference loss (e.g. scrap cost).
  • Define the value of the characteristic at which this loss is incurred.
  • Calculate the loss coefficient, k.
  • Calculate the loss incurred by each deviation from target values.
     At this point, the process branches according to the objectives sought.  Totals and averages can be calculated, comparisons made, or other analyses conducted.  The loss function provides a useful reference and encourages an expanded view of perceived quality.
 
     For assistance introducing loss functions to your organization or pursuing other operational improvement efforts, contact JayWink Solutions for a consultation.

     For a directory of “The War on Error” volumes on “The Third Degree,” see “Vol. I:  Welcome to the Army.”
 
References
[Link] “Taguchi loss function.”  Wikipedia.
[Link] “Taguchi Loss Function.”  WhatIsSixSigma.net
[Link] “Taguchi Loss Function.”  Lean Six Sigma Definition.
[Link] “Taguchi loss function.”  Six Sigma Ninja, November 11, 2019.
[Link] “The Taguchi loss function.”  Thomas Lofthouse; Work Study, November 1, 1999.
[Link] “Taguchi’s Loss Function.”  Elsmar.com.
[Link] “Principles of Robust Design.”  Dr. Nicolo Belavendram; International Conference on Industrial Engineering and Operations Management, July 2012.
[Link] “Robust Design Seminar Report.”  Shyam Mohan; November 2002.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
Making Decisions – Vol. VII:  Perils and Predicaments
Wed, 18 Nov 2020 15:30:00 GMT
http://jaywinksolutions.com/blog/making-decisions-vol-vii-perils-and-predicaments

     Regardless of the decision-making model used, or how competent and conscientious a decision-maker is, making decisions involves risk.  Some risks are associated with the individual or group making the decision.  Others relate to the information used to make the decision.  Still others are related to the way that this information is employed in the decision-making process.
     Often, the realization of some risks increases the probability of realizing others; they are deeply intertwined.  Fortunately, awareness of these risks and their interplay is often sufficient to mitigate them.  To this end, several decision-making perils and predicaments are discussed below.
     Group decision-making can become quite time-consuming.  The iterative, and sometimes combative, nature of group processes can cause a conclusion to seem rather elusive.  This risk can be reduced through careful consideration of the group’s membership prior to any discussion or debate (see “Who should be in the decision-making group?” in Vol. IV).
     To reduce the time required to reach a decision, a unilateral decision may be made (see “Who should make the final decision?” in Vol. IV).  This decision rule, however, is fraught with risk.  The type of person that would insist on making a unilateral decision is the same type that often suffers from overconfidence.  Overconfidence can be a byproduct of pure arrogance, but could also have less loathsome origins.  Underdeveloped judgment or limited context-specific experience may cause a decision-maker to miss important cues or underestimate the severity of a situation.
     Another source of overconfidence is unreliable or inappropriate heuristics.  Prior satisfactory results using such methods could be merely coincidental, but lead to the application of heuristics to an ever-wider range of situations.  Once their limit of applicability – the range often being quite narrow – has been exceeded, heuristics can become dangerous, particularly when they build confidence without building competence.
     Whatever the source of overconfidence, it may lead a decision-maker to accept incomplete, or even suspect, information and limit analysis of the information available.  The overconfident decision-maker believes s/he can overcome these limitations with his/her superior wisdom and insight.  Unfortunately, this is rarely true.
     If a leader chooses not to make a decision, s/he may delegate the responsibility to a subordinate.  Delegating carelessly could have serious consequences.  The delegate could suffer from inexperience and overconfidence, or could be ethically challenged or unduly influenced by politics (office or otherwise), “celebrity,” or other irrelevant factors.  Delegate responsibly!  (See “Of Delegating and Dumping”).
 
     The framing of a decision is a source of significant risk.  A decision definition that cites problems and risks typically prompts a very different response than one described in terms of challenges and opportunities.  Framing that influences a decision may be inadvertent, but may also be a deliberate attempt to manipulate decision-makers.  Those who present a situation to decision-makers may be able to predict their responses based on known affinities, biases, or obligations.  Using this insight, the presenter can prime decision-makers to act in the manner the presenter finds most favorable by framing the decision in a way that triggers a bias, creates the impression of a potential violation of an obligation, or otherwise guides their thinking.  This tactic “tips the scales” toward the desired outcome while making the ostensible decision-maker an unwitting accomplice.
     A decision-maker may anchor on a solution early in the process.  It may be the first idea s/he heard, or the most glamorous, high-tech solution available.  It could be the solution that requires his/her particular expertise and is, therefore, the most familiar or comfortable.  It could also be the solution that s/he was primed to choose by the presentation of the problem and solution options.  Anchoring often leads to confirmation bias, where the decision-maker accepts only information that is confirmatory of the foregone conclusion.  If the decision-maker cannot be dislodged from an anchor, it usually results in a suboptimal, or satisficing, solution.
     Employing a decision-making method such as the Analytic Hierarchy Process (AHP) (see Vol. III), can facilitate an anchor-free decision.  The pairwise comparisons used in AHP are difficult to manipulate in order to reach a predetermined outcome.  As decision complexity, or number of criteria, increases, the difficulty of manipulation increases rapidly.
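     As an illustration of why pairwise comparisons resist manipulation, the sketch below derives priority weights from a small comparison matrix using the common column-normalization approximation.  The criteria and judgment values are hypothetical, and a real application would also check the consistency ratio before trusting the result.

```python
# Approximate AHP priority weights from a pairwise comparison matrix.
# Criteria and judgment values are hypothetical.

criteria = ["cost", "quality", "lead time"]

# matrix[i][j] = how strongly criterion i is preferred over criterion j (1-9 scale)
matrix = [
    [1.0, 1/3, 2.0],
    [3.0, 1.0, 4.0],
    [0.5, 1/4, 1.0],
]

n = len(matrix)
col_sums = [sum(row[j] for row in matrix) for j in range(n)]
normalized = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
weights = [sum(normalized[i]) / n for i in range(n)]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
# Changing any single judgment shifts all of the weights at once, which is
# part of what makes engineering a predetermined outcome difficult.
```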
     Failure to recognize that “do nothing” is a valid option to be considered in many scenarios can place a satisfactory outcome in jeopardy.  As mentioned in Vol. I, there may be no alternative under consideration that will result in an improvement relative to the status quo.  The decision to implement something could result in the waste of significant resources – time, energy, and money.  Incurring these opportunity costs may preclude the pursuit of advantageous projects in favor of misguided endeavors, or “optics.”
 
     Even after a decision is made, perils remain.  Hindsight bias manifests in similar fashion to confirmation bias.  While confirmation bias causes the selective acceptance of information concurrent with the decision-making process, hindsight bias causes selective acceptance of historical information to justify a past decision.  This usually occurs when an investigation of the causes of unsatisfactory results is initiated; the decision-maker wants to defend his/her decision as “correct” despite a disappointing outcome.
     Defense of past decisions – to “save face” or other reasons – can lead to an escalation of commitment, where future decisions are influenced by those made in the past rather than objective analysis.  This is closely related to the sunk costs fallacy, where continued commitment is justified – irrationally – by past expenditures.  Both assume that conditions will change in such a way that will turn a failing endeavor into a success, or that it can be turned around if only it receives more investment.  It is important to learn from “bad” decisions, cut your losses, and move on.
 
     If you’d like to add to this list of decision-making perils and predicaments, feel free to leave a comment below.  Personal insights that help the community learn and grow are always welcome.
 
     For a directory of “Making Decisions” volumes on “The Third Degree,” see “Vol. I:  Introduction and Terminology.”
 
References
[Link] “An Overview of Decision-Making Models.”  Hanh Vu, ToughNickel, February 23, 2019.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
Performance Continuity via Effective Pass-Down
Wed, 04 Nov 2020 15:30:00 GMT
http://jaywinksolutions.com/blog/performance-continuity-via-effective-pass-down

     Myriad tools have been developed to aid collaboration of team members that are geographically separated.  Temporally separated teams receive much less attention, despite this type of collaboration being paramount for success in many operations.
     To achieve performance continuity in multi-shift operations, an effective pass-down process is required.  Software is available to facilitate pass-down, but is not required for an effective process.  The lowest-tech tools are often the best choices.  A structured approach is the key to success – one that encourages participation, organization, and consistent execution.
Participation
     The topic of participation in pass-down encompasses three elements that must be defined and monitored:  the groups that need to participate, who will represent each group in pass-down activities, and the types of activity required.
     There are various groups present in any organization.  Some are common participants in pass-down activities; others are often overlooked in this process.  Several are presented below, with examples of topics that each may discuss during pass-down.

Operations:  The operations group should share any information related to their ability to meet production expectations.
  • productivity (production rate), causes of reductions or interruptions
  • absenteeism, staffing changes, and other personnel issues
  • special orders, trials, material availability
  • production schedule and any changes
  • any unusual occurrences, process instability, or process deviations (“work-arounds”)
  • incoming material quality concerns
Maintenance:  The maintenance team should discuss any significant occurrences and ongoing situations.
  • conditions to be monitored
  • temporary repairs in place
  • projects or repairs in progress
  • PM/PdM status (complete/overdue)
  • downtime report – what occurred during the shift, what equipment remains down, and what is required to return it to service
  • any matter that requires priority
Engineering:  The engineering department should discuss the status and requirements of any ongoing investigations or projects under its direct supervision.
  • equipment installation or upgrade
  • prototyping in progress
  • trial runs or designed experiments
  • data collection
  • conditions or occurrences under investigation
Quality:  The quality assurance group should share information about any non-standard conditions or processes.
  • active quality alerts
  • calibration status (complete/overdue)
  • active deviations
  • process validations or other layouts in progress
  • material sorting in progress (incoming, WIP, F/G)
  • rework in progress
Logistics/Materials:  The materials management group should ensure awareness of any activity that could affect the other groups’ abilities to perform planned activities.
  • material shortages
  • urgent deliveries being tracked
  • inventory/cycle count accuracy (shrinkage, unreported scrap, etc.)
Safety, Health, Environment:  The SHE team should discuss any unresolved issues and recent occurrences.
  • injuries, spills, and exposures
  • accidents and near-misses
  • ergonomic evaluations in progress or needed
  • drills to be conducted
  • system certifications needed
  • training requirements
Security:  The security team should share information about any non-routine activities that require their attention.
  • illegal or suspicious activity in or near the facility
  • contractor access required (e.g. special areas of the facility)
  • security system malfunctions (e.g. camera, lighting, badge access) and response plan

     Depending on the size of the organization and the specific activities in which it engages, the groups mentioned may or may not be separate entities.  The example topics, however, remain valid regardless of the roles or titles of the individuals participating in the information exchange.  Likewise, there is significant overlap in the information needs of these groups.  For example, engineering and quality routinely work together to design and conduct experiments and collect data.  The maintenance group may make adjustments to a process to support an experiment, while the operations group runs the equipment.  If a third-party contractor is needed, notifying security in advance could accelerate the contractor’s access to the facility, keeping the experiment on schedule.
     Each group conducting a pass-down may have a single representative that compiles and exchanges information; there could also be multiple members of any group participating.  The appropriate representation depends on the number of topics to be discussed and each member’s knowledge of them.  In general, additional participants are preferable to incomplete information.
     The activity of each participant is prescribed by the process chosen or required by circumstances.  In-person discussions are often the most effective; each participant has the opportunity to ask questions, get clarifications, and discuss alternatives.  Face-to-face meetings should be supported by documentation for future reference.  Also, when in-person discussions cannot be held (i.e. non-overlapping shifts), the documentation, on its own, serves as the pass-down.  For this reason, providing coherent written pass-downs is a critical habit that all team members must form.
     The form of pass-down used defines the general activity requirements.  The example topics, provided above for several groups, allude to some specific activities that may be required of participants.  Further specific tasks are defined by the way in which pass-down information is organized, collected, and shared.  This is our next topic.
 
Organization
     There are multiple facets to the organization of an effective pass-down process.  The organization of people was discussed in “Participation,” above.  Next is the organization of information on the written pass-down reports.  Formatted report templates are useful for this purpose.  A template ensures that no category of relevant information is overlooked; each has a designated space for recording.  Omission of any information type deemed necessary will be immediately obvious; the writer can be prompted to complete the form before details fade from memory.
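     A minimal sketch of such a template is shown below; the field names are assumptions to be tailored to the groups and topics your organization defines, and the check simply flags any designated space left blank:

```python
# Minimal pass-down report template sketch. Field names are illustrative;
# tailor them to the groups and topics your organization defines.

from dataclasses import dataclass, fields

@dataclass
class PassDownReport:
    date: str
    shift: str
    line: str
    author: str
    safety_notes: str        # injuries, near misses, active concerns
    production_summary: str  # rate, interruptions, schedule changes
    equipment_status: str    # downtime, temporary repairs, conditions to monitor
    quality_notes: str       # active alerts, deviations, sorts or rework in progress
    open_actions: str        # what the incoming shift must follow up on

def missing_fields(report: PassDownReport) -> list:
    """Flag any designated space left blank so it can be completed before details fade."""
    return [f.name for f in fields(report) if not getattr(report, f.name).strip()]

report = PassDownReport(
    date="2020-11-04", shift="2nd", line="Line 3", author="J. Smith",
    safety_notes="No incidents.",
    production_summary="Rate 92% of plan; 40 min loss to material shortage.",
    equipment_status="Temporary repair on conveyor drive belt; monitor for slippage.",
    quality_notes="",   # left blank -> flagged below
    open_actions="Confirm replacement belt arrival with maintenance.",
)
print(missing_fields(report))   # ['quality_notes']
```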
     As mentioned in the introduction, software can be used to facilitate this process, but the form need not be digital – at least not at first.  Hand-written forms should be scanned and archived for use when an historical record is needed.  A whiteboard could be used for short-term notes; the board can be erased and reused once the information has been recorded in the permanent pass-down record.
     Whatever “tools” your organization chooses to use, keep the pass-down process as straightforward as possible.  The simpler to understand and the easier to use the better.  Less required training and more understanding result in wider use and greater value to the organization.
     When overlapping shifts permit in-person communication, the pass-down should be treated as any other meeting – an agenda, a schedule, and required attendance should be defined.  See “Meetings:  Now Available in Productive Format!” for additional guidance.
     Pass-down may be conducted by each department or function separately; however, overlapping information needs behoove an organization to consider alternative attendance schemes.  Conducting pass-down per production line, or value stream, is a common example.  All of the groups that support operations on that production line – maintenance, quality, engineering, logistics, etc. – share information with all others simultaneously.  This is an efficient method to ensure that the information required to effectively support operations has been provided to those who need it.
 
Consistency
     The greater the number of groups conducting pass-down activities within an organization, the more difficult it is to execute consistently.  Some elements of consistent execution have been discussed, such as a preformatted report, meeting agenda, and required attendance.  To ensure that consistency, once attained, is maintained, higher-level managers should audit – or attend regularly – as many pass-down meetings as possible.  Other members should also audit meetings they do not regularly attend.  Auditing other meetings facilitates the comparison of differing techniques, or styles, in pursuit of best practices.  All groups should be trained to use the best practices, restoring consistent execution.
     Pass-down meetings are also opportunities for “micro-training” or reminders on important topics.  Examples include safety tips (e.g. proper use of PPE, slip/fall prevention), reminders to maintain 5S and to be on the lookout for “lean wastes.”  This time could also be used for a brief description of a new policy, review of financial performance, or to introduce new team members.
     Micro-training may not be an obvious component of pass-down procedures, but it is helpful in encouraging consistency in many practices.  It is an excellent opportunity to highlight safety, teamwork, and other topics that are important to the group.  Achieving consistency in other activities, in turn, reinforces consistent execution of pass-down activities.
 
     Effective pass-down is an essential practice that is often neglected and allowed to disintegrate.  It does not capture headlines and it’s not glamorous; celebrities do not make public service announcements about it.  It does not require high-tech tools or billionaire investors to bring it to fruition.  Too often, the latest technology, social media platform, or other high-profile whiz-bang distracts leaders and managers from the things that are truly relevant to the performance of an organization.  Solid fundamentals, fanatically cultivated and relentlessly developed, pave the way to organizational success; losing sight of them can be disastrous.  Effective pass-down is one of these fundamentals.
 

     For assistance with establishing a robust pass-down process, repairing a broken, neglected system, or solidifying other fundamental practices, contact JayWink Solutions for a consultation.
 
References
[Link] “A Critical Shift: How Adding Structure Can Make Shift Handovers More Effective.”  Tom Plocher, Jason Laberge, and Brian Thompson; Automation.com, March 2012.
[Link] “Pass the shift baton effectively.”  Paul Borders; Plant Services, April 18, 2017.
[Link] “7 Strategies for Successful Shift Interchange.”  Maun-Lemke, LLC, 2006.
[Link] “Reducing error and influencing behavior.”  UK Health and Safety Executive, 1999.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
It Starts in the Parking Lot
Wed, 21 Oct 2020 14:30:00 GMT
http://jaywinksolutions.com/blog/it-starts-in-the-parking-lot

     A person’s first interaction with a business is often his/her experience in its parking lot.  Unless an imposing edifice dominates the landscape, to be seen from afar, a person’s first impression of what it will be like to interface with a business is likely formed upon entering the parking lot.  It is during this introduction to the facility and company that many expectations are formed.  “It” starts in the parking lot.  “It” is customer satisfaction.
     Retail-business owners and managers are often aware of concerns about parking accommodations, but most give the issue little thought and exceedingly few make it a priority.  Perhaps their awareness does not extend to the impact it could have on the bottom line.  The parking lot experience is a potential differentiator for competitive brick-and-mortar businesses and is critical when competing with online retailers.  If “buy local” campaigns are to succeed, visits to hometown retailers must be convenient and pleasant.
     These experiences are often controlled by property managers instead of managers of the businesses served by the parking infrastructure.  Whether you own, manage, or lease the real property, there are a number of factors to be considered when building, selecting, or improving the physical space that serves one or more retail businesses.
  • The number of entrances and exits should ensure sufficient traffic flow for unencumbered access to the business.
  • Entrances and exits should be located such that adjacent traffic patterns do not hinder entry and departure of the property.
  • Aisles should be of sufficient width to allow two-way traffic flow.
  • Spaces should be of sufficient width to allow vehicle doors to be opened widely, facilitating entry to and exit from vehicles without damaging adjacent vehicles.
  • Spaces should be of sufficient length to accommodate modern vehicles without protruding into the aisles.
  • There should be a sufficient number of spaces available to accommodate the anticipated peak volume of customers plus employees’ vehicles.  Whenever possible, this number should be increased to accommodate forecast error, business growth, contractors or other visitors, etc.
  • Frivolous structures that force parking spaces to be further from building entrances than necessary should be avoided.
  • Sufficient lighting should be provided to create a safe environment for patrons visiting in low-light hours.
  • Safety should be enhanced by avoiding obstructed views to the parking lot from any building or other area of the lot.  Unnecessary structures, recessed areas, tall bushes, and other places in which miscreants can hide should be avoided.
  • Provisions for managing the effects of inclement weather should also be in place.  Effective drainage is needed to prevent puddles from forming; snow and ice removal must also be accommodated if the climate calls for it.
  • The area should be kept free of trash, debris, or other refuse.
  • Repairs should be made promptly to potholes, curbing, signage, lighting, and so on.  Doing so can prevent further degradation, vehicle damage, customer mishaps that may lead to personal injury, or other liabilities.
 
     There may be additional considerations that require the attention of service providers, depending on the nature of the business.  Automotive services, in particular, must thoroughly consider their parking infrastructure.  In addition to providing ample space for customers, employees must be able to access and maneuver vehicles without risk of damage.  This type of business may also require additional parking in order to accommodate vehicles that require an extended storage period in addition to current and prospective customers’ vehicles.  Reviewing the nature of traffic flow through your business could pay dividends in efficiency and customer satisfaction.
 
     The factors outlined for retail businesses also apply to manufacturing facilities.  However, the magnitude, or priority, of some may differ due to the nature of manufacturing operations.  Because customer satisfaction begins with employee satisfaction, it remains an important topic.  Also, the fact that such facilities are, more or less, closed to the public complicates the matter somewhat.
     An example of a change in magnitude is the number of entrances and exits needed to ensure sufficient traffic flow.  Retail and service businesses typically experience variable, dispersed traffic flows.  Peak periods may be predictable, however, such as Friday afternoon at a bank, and can be managed in various ways.  Peak traffic flows at manufacturing facilities are highly predictable, with few options for mitigation.  Increasing the number of entrances and exits will reduce congestion at the beginning and end of each shift.
     Related to peak traffic flow, the number of spaces available is a common failure among many manufacturing facilities.  Scheduling overlapping shifts doubles the required number of spaces, yet few facilities provide sufficient parking to effectively support the chosen shift model.  This leads to parking in unauthorized places, such as spaces designated for visitors or handicapped parking, pedestrian walkways, or any space large enough to fit a vehicle.  Employees often return at their first opportunity to remove their vehicles from unauthorized spaces.  Both actions place the employee at risk of disciplinary action by the employer that created the problem!
     Security measures in place at many manufacturing facilities violate the “rules” of proper parking area design.  Specifically, parking areas are often located at an excessive distance from facility entrances.  Entry is often further impeded by sensitive landscaping, fences, or other mostly unnecessary structures that create circuitous paths that serve only to retard access by essential personnel while providing little, if any, legitimate security.
 
     Online businesses must also monitor their parking lots, though they are virtual.  A landing page should be inviting; the entire site should be easy to navigate and quick to load.  It should be easy to complete the desired transactions and clear when they are complete.  Online businesses may have an advantage over their real-world counterparts – no driving necessary –  but an unpleasant parking lot experience can be just as damaging in the virtual world as in the physical.
 
     The importance of parking infrastructure is routinely overlooked.  “It’s just the open space outside the building” is an unfortunately common mindset.  Careful consideration of design decisions will reveal potential impacts to customers and employees.  As early as possible, the detrimental effects should be eliminated and advantageous features accentuated.  Accessibility and required security protocols should be simple enough for all comers to understand.  It is often “the little things” that tip the scale in your favor when trying to attract customers and talented employees.  There are many components of customer satisfaction, but it always starts in the parking lot.
 
     Contact JayWink Solutions to discuss improvements in the customer experience at your business or other operations-related needs.
 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[The War on Error – Vol. VIII:  Precontrol]]>Wed, 07 Oct 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/the-war-on-error-vol-viii-precontrol     There is some disagreement among quality professionals as to whether precontrol is a form of statistical process control (SPC).  Like many tools prescribed by the Shainin System, precontrol’s statistical sophistication is disguised by its simplicity.  The attitude of many seems to be that if it isn’t difficult or complex, it must not be rigorous.
     Despite its simplicity, precontrol provides an effective means of process monitoring with several advantages (compared to control charting), including:
  • It is intended for use on the shop floor for rapid feedback and correction.
  • Process performance can be evaluated with far fewer production units.
  • No calculations are required to perform acceptance evaluations.
  • No charts are required.
  • It is not based on an assumption that data is normally distributed.
  • It is straightforward, based on specification limits.
  • It uses a simple, compact set of decision rules.
     This installment of “The War on Error” explains the use of precontrol – how process monitoring zones are established, the decision rules that guide responses to sample measurements, and the fundamental requirements of implementation.  Some potential modifications will also be introduced.
Preparations
     Successful implementation of precontrol begins with an evaluation of the process to be monitored to ensure that it is a suitable application.  Information about existing processes should be readily available for this purpose.  New processes should be carefully considered, with comparisons to similar operations, to determine suitability.  Precontrol is best-suited to processes with high capability (i.e. low variability) and stability (i.e. output drifts slowly).
     Operators’ process knowledge is critical to the success of precontrol.  They must understand the input-output relationship being monitored and be capable of making appropriate adjustments when needed.  Otherwise, their interventions are merely process “tampering” that results in higher variability and lower overall performance.  Reliable measurement systems (see Vol. IV, Vol. V) are required to support effective process control by operators without excessive intervention.
 
Process Monitoring Zones
     Process monitoring zones, or precontrol zones, are based on the tolerance range of the process output, with the target value centered between the upper and lower specification limits (USL, LSL) of a bilateral (two-sided) tolerance.  The relationship of this tolerance range to input variables should be known in order to make effective process adjustments when needed.  Tolerance parallelograms, or other techniques, can be used for this purpose.
     To define the precontrol zones for a bilateral tolerance, divide the tolerance range into four equal segments.  The two segments that flank the target value are combined to create the green zone; measurements that fall in this zone are acceptable.  In normally-distributed data, this 50% of tolerance encompasses approximately 86% of process output.
The remaining segments, each containing 25% of the tolerance, are the yellow zones.  The yellow zones are often called warning zones, because measurements that fall in these zones may indicate that the process has drifted and requires adjustment.  In normally-distributed data, approximately 7% of process output will fall in each of the yellow zones.  These estimates assume that the width of the normal distribution matches the tolerance range and its mean equals the target value.
The precontrol red zones encompass all values outside the specification limits.  A graphical representation of bilateral tolerance precontrol zones is shown in Exhibit 1.
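     For readers who prefer to see the arithmetic, the brief Python sketch below computes bilateral precontrol zone boundaries from the specification limits and reproduces the approximate zone percentages under the assumption stated above (a centered, normally-distributed process whose 6σ spread equals the tolerance range).  The function and variable names are illustrative only.

from statistics import NormalDist

def bilateral_precontrol_zones(lsl, usl):
    # Divide a bilateral tolerance into green and yellow precontrol zones.
    quarter = (usl - lsl) / 4.0
    green = (lsl + quarter, usl - quarter)    # middle 50% of the tolerance
    yellow_low = (lsl, lsl + quarter)         # lower 25% of the tolerance
    yellow_high = (usl - quarter, usl)        # upper 25% of the tolerance
    return green, yellow_low, yellow_high     # red zones lie outside [lsl, usl]

# Example: a 10.0 +/- 0.6 characteristic
print(bilateral_precontrol_zones(9.4, 10.6))

# Approximate zone percentages when the process spread matches the tolerance range
# and the mean sits on target; the green boundaries then fall at +/- 1.5 sigma.
nd = NormalDist()
p_green = nd.cdf(1.5) - nd.cdf(-1.5)          # ~0.866 (about 86%)
p_yellow_each = nd.cdf(3.0) - nd.cdf(1.5)     # ~0.067 (about 7% per side)
print(f"Green: {p_green:.1%}  Yellow (each): {p_yellow_each:.1%}")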
     There are three possible precontrol zone configurations for unilateral (one-sided) tolerances.  The first, called “zero is best,” simply divides the tolerance range by two.  The green zone encompasses the 50% of tolerance nearest zero (lower half) and a single yellow zone encompasses the remaining tolerance, up to the USL (upper half).  A single red zone encompasses all values above the USL.  This configuration is used for measurements that cannot produce negative values, such as surface roughness or yield loss.  “Zero is best” precontrol zones are shown graphically in Exhibit 2.
     The remaining two configurations are, essentially, mirror images of each other.  In one case, the LSL is defined, with no upper bound specified (“more is better”); the other defines the USL, while no lower bound is specified (“less is better”).  For each case, the tolerance range used to define precontrol zones is the difference between the specification limit (LSL or USL) and the best output that can be expected from the process (highest or lowest).  A single yellow zone encompasses 25% of the “tolerance” nearest the specification limit.
     The green zone includes the remaining 75% of “tolerance” and beyond.  Any measurements beyond the “best case” value should be investigated.  The expectations of the process may require adjustment, leading to recalculation of the precontrol limits.  It could also lead to the discovery of a measurement system failure.  Precontrol limits should also be reviewed as part of any process improvement project.
     The red zone in each case encompasses all values beyond the specification limit (y < LSL or y > USL).  “More is better” and “less is better” configurations of unilateral tolerance precontrol are presented graphically in Exhibit 3.
     Defined precontrol zones establish the framework within which process performance is evaluated.  The remaining component defines how to conduct such evaluations via decision rules that guide setup validation or qualification, run-time evaluations, and sampling frequency.  These decision rule sets are presented in the following section.
 
Decision Rules
     The first set of decision rules defines the setup qualification process.  To approve a process for release to production, the measurements of five consecutive units must be in the green zone.  The setup qualification guidelines are as follows:
  • If five consecutive measurements are in the green zone, release process to production.
  • If one measurement is in a yellow zone, reset green count to zero.
  • If two consecutive measurements are yellow, adjust the process and reset green count to zero.
  • If one measurement is in a red zone, adjust the process and reset green count to zero.
     Repeat measurements until five consecutive measurements are in the green zone.  If significantly more than five measurements are regularly required to release a process, an investigation and improvement project may be warranted.
     Once the process has been released to production, a new set of decision rules is followed.  For run-time evaluations, periodic samples of two consecutive units are measured.  Responses to sample measurement results are made according to the following guidelines:
  • If both measurements are green, continue production.
  • If one measurement is green and the other yellow, continue production.
  • If both measurements are in the same yellow zone, stop production and adjust the process.
  • If one measurement is in each yellow zone of a bilateral tolerance, stop production and investigate the cause of the excessive variation.  Eliminate or control the cause and recenter the process.
  • If either measurement is in a red zone, stop production and adjust the process.
     After any production stoppage or adjustment, return to the setup qualification guidelines to approve the process for release to production.  All units produced since the last accepted sample should be quarantined until a Material Review Board (MRB), or other authority, has evaluated the risk of defective material being present.  This evaluation may result in a decision to scrap, sort, or ship the quarantined material.
     The final decision rule defines the sampling frequency or interval.  The interval between samples can be defined in terms of time or quantity of output.  The target sampling interval is one-sixth the interval between process adjustments, on average.  Stated another way, the goal is to sample six times between required adjustments.  For example, a process that, on average, produces 60 units per hour and runs for three hours between required adjustments should be sampled once every 30 minutes or once per 30 units of production.  The sampling frequency may change over time as a result of learning curve effects, improvement projects, changes in equipment reliability, or other factors that influence process performance.
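     The run-time rules are simple enough to express in a few lines of code.  The Python sketch below classifies a measurement into a precontrol zone for a bilateral tolerance and applies the classical two-unit decision rules; it also repeats the sampling-interval arithmetic from the example above.  Names and structure are illustrative, not part of any standard.

def zone(x, lsl, usl):
    # Classify a measurement into a precontrol zone for a bilateral tolerance.
    quarter = (usl - lsl) / 4.0
    if x < lsl or x > usl:
        return "red"
    if lsl + quarter <= x <= usl - quarter:
        return "green"
    return "yellow_low" if x < lsl + quarter else "yellow_high"

def runtime_decision(x1, x2, lsl, usl):
    # Apply the classical precontrol run-time rules to a two-unit sample.
    z1, z2 = zone(x1, lsl, usl), zone(x2, lsl, usl)
    if "red" in (z1, z2):
        return "stop production and adjust"
    if "green" in (z1, z2):
        return "continue production"            # two greens, or one green and one yellow
    if z1 == z2:
        return "stop production and adjust"     # both in the same yellow zone
    return "stop production and investigate"    # one measurement in each yellow zone

print(runtime_decision(10.05, 10.52, 9.4, 10.6))   # one green, one yellow -> continue

# Sampling interval: aim for six samples between required adjustments.
units_per_hour, hours_between_adjustments = 60, 3
print(units_per_hour * hours_between_adjustments / 6)   # 30 units, i.e. about every 30 minutes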
 
     The simplicity of precontrol, demonstrated by the division of the tolerance range into precontrol zones and easily-applied decision rules, makes it an attractive tool for implementation in production departments.  Administration of such a tool by those responsible for production maximizes its utility; it takes advantage of the expertise of process managers and operators and eliminates delays in response to signs of trouble in the process.
 
Modifications to Precontrol
     The formulation presented above may be called “classical precontrol;” it serves as the baseline system to which modifications can be made.  F. E. Satterthwaite’s original formulation (1954) prescribed a green zone containing 48% of the tolerance range and yellow zones containing 26% of the tolerance range each.  The 50%/25% convention was adopted for ease of recall and calculation in an era preceding electronic aids.  If such aids are in use, the choice of zone sizes is nearly imperceptible in practice.  Either scheme can be chosen, but it should be used consistently throughout an organization to avoid confusion.
     Two-stage precontrol retains the precontrol zone definitions and the setup qualification and sampling frequency rules of classical precontrol, but expands the run-time evaluation rules.  In two-stage precontrol, responses to sample measurement results are in accordance with the following guidelines:
  • If both measurements are in the green zone, continue production.
  • If either measurement is in a red zone, stop production and adjust the process.
  • If either measurement is in a yellow zone, measure the next three units.
    • If three measurements in the expanded sample (i.e. 5 units) are green, continue production.
    • If three measurements in the expanded sample are yellow, stop production and adjust the process.
    • If any of the measurements in the expanded sample are red, stop production and adjust the process.
     Proponents of two-stage precontrol claim that the existence of a yellow-zone measurement in a two-unit sample is an ambiguous result.  Therefore, further sampling is required to determine the condition of the process.
     Modified precontrol is a hybrid of classical and two-stage precontrol and control charts.  The setup qualification and sampling frequency rules of classical precontrol are retained.  The run-time evaluation rules are the same as those used in two-stage precontrol.  Precontrol zone definitions are adapted from Shewhart’s control limits.  Green zone boundaries are defined by ±1.5σ (standard deviations of process performance), while yellow zones occupy the remaining tolerance range (±3σ).  This version negates one of the key advantages of classical precontrol – namely, no calculations required for evaluation – but the resulting sensitivity may be needed in some circumstances.  With increased sensitivity, however, comes a higher rate of false alarms (Type I error) that prompt adjustments that may be unnecessary.
     While other modification schemes exist, a thorough treatment is not the objective of this presentation.  If one of the formulations outlined above does not suit your needs, the presentation should suffice as an introduction to possible modifications.  To find a more suitable process control method, the cited references, or other sources, can be used for further research.
     As stated at the outset, charts are not required to implement any of the formulations of precontrol described above.  However, a precontrol chart can be a useful addition to the basic tool.  Charting provides historical data that can be applied to process improvement efforts or to detect excessively frequent adjustments, called tampering.  A precontrol chart can also provide an indication of operators’ effectiveness in making adjustments or the need for additional training.
     The final note to be made is less a modification than a recommendation.  The previous discussion of precontrol has been based on its application to process outputs.  While this is useful, the power of precontrol is maximized when it is applied to process inputs whenever practical.  This proactive approach can prevent high input variability from negatively affecting process output, reducing the number of samples, stoppages, and adjustments required to produce the demanded quantity of output.
 
 
     Despite its advantages, precontrol seems to incense some vocal advocates of SPC and control charting.  The criticisms of precontrol are not discussed here in detail for the following reasons:
  • A discussion of statistics beyond the scope of this presentation would be required.
  • Many of the criticisms are unfairly lodged, demonstrating the purveyors’ bias (or, more charitably, loyalty) toward their chosen process monitoring tool.
  • Precontrol is not presented as a substitute for SPC in all applications.
  • The criticisms are often more academic than pragmatic.
  • If precontrol helps your organization perform at the desired level, the criticisms are irrelevant.
Instead, modifications were presented to address some legitimate concerns with precontrol.  Other alternatives are also available; consult the references or other sources for more information.
     The decision to implement precontrol, in any configuration, or any of the alternatives, requires careful consideration of the process, operators, and customers.  Implementing an inappropriate or ineffective process control method can damage the credibility of all future efforts.  This may lead to process tampering or, on the opposite end of the spectrum, neglect.  The application must be monitored to ensure the system supports the ongoing effectiveness of operators in maintaining required quality and productivity levels.
 

     Contact JayWink Solutions for assistance in evaluating processes, establishing a precontrol system, training, or other process monitoring, control, and improvement needs.
 
     For a directory of “The War on Error” volumes on “The Third Degree,” see “Vol. I:  Welcome to the Army.”
 
References
[Link] “Strategies for Technical Problem Solving.”  Richard D. Shainin; Quality Engineering, 1993.
[Link] “An Overview of the Shainin System™ for Quality Improvement.”  Stefan H. Steiner, R. Jock MacKay, and John S. Ramberg;  Quality Engineering, 2008.
[Link] “Precontrol.”  Wikilean.
[Link] “Pre-Control: No Substitute for Statistical Process Control.”  Steven Wachs; WinSPC.com.
[Link] “The Power of PRE-Control.”  Hemant P. Urdhwareshe; Symphony Technologies.
[Link] “Pre-Control May be the Solution.”  Jim L. Smith; Quality Magazine, September 2, 2009.
[Link] “Using Control Charts or Pre-control Charts.”  Carl Berardinelli; iSixSigma.
[Link] “The theory of ‘Pre-Control’: a serious method or a colourful naivity?”  N. Logothetis; Total Quality Management, Vol 1, No 2, 1990.
[Link] “Precontrol.”  Beverly Daniels and Tim Cowie; IDEXX Laboratories, 2008.
[Link] “Shewhart Charts & Pre-Control:  Rivals or Teammates?”  Tripp Martin; ASQC Statistics Division Newsletter, Vol 13, No 3, 1992.
[Link] “Pre-control and Some Simple Alternatives.”  Stefan H. Steiner; Quality Engineering, 1997.
[Link] “Pre-control versus X̄ and R Charting:  Continuous or Immediate Improvement?”  Dorian Shainin and Peter Shainin; Quality Engineering, 1989.
[Link] World Class Quality.  Keki R. Bhote; American Management Association, 1991.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[The War on Error – Vol. VII:  The Shainin System]]>Wed, 23 Sep 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/the-war-on-error-vol-vii-the-shainin-system     Lesser known than Six Sigma, but no less valuable, the Shainin System is a structured program for problem solving, variation reduction, and quality improvement.  While there are similarities between these two systems, some key characteristics lie in stark contrast.
     This installment of “The War on Error” introduces the Shainin System, providing background information and a description of its structure.  Some common problem-solving tools will also be described.  Finally, a discussion of the relationship between the Shainin System and Six Sigma will be presented, allowing readers to evaluate the potential for implementation of each in their organizations.
Origins of the Shainin System
     Development of the Shainin System began in the 1940s while Dorian Shainin worked with aeronautics companies to resolve production issues.  In the intervening decades, the system has evolved, with additional tools developed, through Shainin’s experience with numerous operations and a vast range of quality and reliability problems.
     Use of the Shainin System to improve process performance is based upon the following tenets:
  • A single root cause is the source of the greatest amount of variation (the “Red X”); it may be an independent variable or the interaction effects of multiple variables.  The “Pink X” and “Pale Pink X” are the second and third largest contributors to variation, respectively.
  • Breakthrough improvements in process performance are achieved only by identifying and controlling the Red X.  The Pink X and Pale Pink X may also require improvement.  The Red X is critical because the output varies as a function of the squares of the input variations; a common form of this relation is shown after this list.
  • Random variation does not exist.  Several input factors (Xs) affect the variation in output (Y).  With sufficient resources, all Xs could be identified.
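     A common form of the relation referenced above, consistent with the description of squared input variations, is σY² ≈ σX1² + σX2² + σX3² + …; because the terms are squared, the largest single contributor (the Red X) dominates the total output variation.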
     The Shainin System is also known by a more descriptive moniker, “Red X Problem Solving,” owing to its focus on the single largest contributor to process variation.  It has been developed to be statistically rigorous, while keeping complex statistics in the background.  This expands the use of tools from highly-trained statisticians to operators and engineers of all skill levels.
     Red X Problem Solving is best suited for process development projects with the following characteristics:
  • medium- to high-volume process,
  • data are readily available,
  • statistical methods are in common use, and
  • process intervention is difficult.
 
The FACTUAL Framework
     Much like Six Sigma, the Shainin System uses an acronym to guide application of its methodology.  Fortunately, this one is easy to pronounce!  The FACTUAL framework comprises the steps Focus, Approach, Converge, Test, Understand, Apply, and Leverage.
     Each component of the FACTUAL framework is presented in a summary table below.  The top cell of each table contains a description of activities associated with that step.  The lower left cell contains the deliverables expected to result from those activities.  The lower right cell lists examples of tools commonly used to achieve the objectives of each step.  Those designated with a “*” are discussed in the next section.
     The FACTUAL framework exhibits a similar flow to DMAIC – problem definition → data collection → analysis → implementation → monitoring – but the Shainin System takes much greater advantage of graphical analysis than does Six Sigma.  Use of graphical tools facilitates implementation by non-mathematicians and expedites identification of the Red X.  The Shainin System toolbox is the topic to which we now turn.
 
Tools of the Shainin System
     More than 20 tools have been developed as part of the Shainin System, using a foundation of rigorous statistics.  The tools are designed to be intuitive and easy to use by practitioners without extensive statistical training.  Many are unique, while a few have been adapted or expanded for use within the Shainin System.  The descriptions below only preview the tools available.  Consult the references cited, or other sources, for additional information.  Links to additional information are provided to assist this effort.
 
 [Focus]
5W2H:  5W2H is an abbreviation for seven generic questions that can be used to define the scope of a problem.  The 5 Ws are Who? What? Where? When? and Why?  The 2 Hs are How? and How much?  As simple as this “tool” is, it must be used with caution.  If the answers to these questions are perceived as accusatory, the improvement effort may lose the support of critical team members, hindering identification of the Red X and implementation of a solution.
Links to additional information on tools used in the Focus step:
Eight Disciplines (8D)
 
 [Approach]
Isoplot:  An Isoplot is used to evaluate the reliability of a measurement system; it is an alternative method to variable gauge R&R.  To construct and interpret an Isoplot (see Exhibit 1):
1) Measure the characteristic of interest on each of 30 pieces.  Repeat the measurement on each piece.
2) Plot the pairs of measurements; the first measurement on the horizontal axis, the second on the vertical axis.
3) Draw a 45° line (slope = 1) through the scatter plot created in step 2.
4) Draw two lines, each parallel to the 45° line, that bound all data points.
5) Determine the perpendicular distance between the lines drawn in step 4.  This distance represents the measurement system variation, ΔM.
6) Determine the process variation, ΔP.  This is the range of measurements along the horizontal axis.
7) Calculate the Discrimination Ratio:  ΔP/ΔM.
8) If (ΔP/ΔM) ≥ 6, accept the measurement system; otherwise, seek improvements before using the measurement system in the search for the Red X.
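     The graphical construction can also be approximated numerically.  In the Python sketch below, the perpendicular distance between the bounding 45° lines (ΔM) is estimated from the spread of the repeat-measurement differences, and ΔP is taken as the range of the first measurements; the acceptance guideline is then (ΔP/ΔM) ≥ 6.  The function name and data are hypothetical, and only ten pairs are shown for brevity.

from math import sqrt

def isoplot_evaluation(first, second):
    # first, second: paired measurements of the same parts, in the same order.
    offsets = [y - x for x, y in zip(first, second)]
    # Bounding 45-degree lines are y = x + max(offsets) and y = x + min(offsets);
    # their perpendicular separation approximates the measurement variation.
    delta_m = (max(offsets) - min(offsets)) / sqrt(2)
    delta_p = max(first) - min(first)          # process variation along the horizontal axis
    ratio = delta_p / delta_m if delta_m else float("inf")
    return delta_p, delta_m, ratio, ratio >= 6

m1 = [10.02, 10.11, 9.95, 10.20, 9.88, 10.05, 10.15, 9.91, 10.08, 9.99]
m2 = [10.03, 10.09, 9.96, 10.22, 9.87, 10.06, 10.13, 9.93, 10.07, 10.00]
print(isoplot_evaluation(m1, m2))   # Discrimination Ratio well above 6 -> accept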
 [Converge]
Component Search:  This technique can be called, simply, “part swapping.”  By exchanging components in an assembly and tracking which combinations perform acceptably and which do not, the source of the problem can be traced to a single component or combination of components.  To conduct a Component Search, select a set of the best-performing assemblies (BoB, “best of the best”), and an equal number of the worst-performing (WoW, “worst of the worst”).  As parts are swapped, poor performance will follow the “problem parts.”
Multivari Analysis:  A multivari chart is a type of run chart that displays multiple measurements within samples over time.  For example, a characteristic is measured in multiple locations on each part.  Samples consist of consecutively-produced parts, collected at regular intervals.  As can be seen in Exhibit 2, plotting these data allows rapid evaluation of positional (within-part), cyclical (part-to-part), and temporal (time-to-time) variations.
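     A rough sense of the comparisons a multivari chart supports can be obtained with a few lines of code.  The Python sketch below computes the positional, cyclical, and temporal ranges from a nested data set (sample → part → positional measurements); the data and structure are hypothetical.

# samples[t][p] = positional measurements on part p of sample t (hypothetical data)
samples = [
    [[10.01, 10.04, 10.02], [10.00, 10.05, 10.03]],   # sample 1 (two consecutive parts)
    [[10.03, 10.06, 10.02], [10.02, 10.07, 10.05]],   # sample 2
    [[10.06, 10.10, 10.07], [10.08, 10.11, 10.09]],   # sample 3
]

# Positional (within-part) variation: largest range within any single part
positional = max(max(part) - min(part) for sample in samples for part in sample)

# Cyclical (part-to-part) variation: largest range of part averages within a sample
part_avg = [[sum(p) / len(p) for p in sample] for sample in samples]
cyclical = max(max(avgs) - min(avgs) for avgs in part_avg)

# Temporal (time-to-time) variation: range of sample averages across samples
sample_avg = [sum(avgs) / len(avgs) for avgs in part_avg]
temporal = max(sample_avg) - min(sample_avg)

print(f"positional={positional:.3f}  cyclical={cyclical:.3f}  temporal={temporal:.3f}")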
Links to additional information on tools used in the Converge step:
Concentration Diagram
Group Comparison
Solution Tree – example shown in project context
 
 [Test]
B vs. C:  Comparing the current process (C) with a proposed “better” process (B) develops confidence that the new process will truly perform better than the existing one.  Data collected while running the B process should indicate improved performance compared to C.  When C is again operated, performance should return to the previous, unsatisfactory level.  If process performance cannot be predictably controlled in this fashion, further investigation is required.  Several iterations may be required to sufficiently demonstrate that the new process, B, outperforms the existing process, C.
Variable Search:  Conducted in similar fashion to Component Search, Variable Search is employed with five or more suspect variables.  “High” and “low” values are chosen for each variable, analogous to “good” and “bad” parts.  Comparisons of performance are then made, varying one suspect variable at a time, to incriminate or eliminate each variable.  If a characteristic is “the same” in “good” and “bad” assemblies, it is unlikely to be a significant contributor to the issue experienced.
Links to additional information on tools used in the Test step:
ANOVA on Ranks
Full Factorial Experiments or Full Factorial Example
 
 [Understand]
Tolerance Parallelogram:  Realistic tolerances to be used for the Red X variable to achieve Green Y output can be determined using a Realistic Tolerance Parallelogram Plot.  To construct and interpret a Tolerance Parallelogram (see Exhibit 3):
1) Plot Red X vs. Green Y for 30 random parts.
2) Draw a median line, or regression line, through the data.
3) Draw two lines, parallel to the median line, that bound “all but one and a half” of the data points.  The vertical distance between these two lines represents 95% of the output variation caused by factors other than the variable plotted.  If this distance is large, the variable may not be as significant as suspected.  Perhaps it is a Pink X or Pale Pink X instead of the Red X.
4) Draw horizontal lines at the upper and lower limits of the customer requirement for the output (Y).
5) From the intersection of the lower customer requirement line and the lower parallelogram boundary, draw a vertical line to the horizontal axis.
6) From the intersection of the upper customer requirement line and the upper parallelogram boundary, draw a vertical line to the horizontal axis.
7) The intercepts of the lines drawn in steps 5 and 6 with the horizontal axis represent the realistic tolerance limits for input X to achieve Green Y output.
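     The construction above can be approximated numerically when a plot is impractical.  The Python sketch below substitutes a least-squares line for the median line and brackets all residuals (rather than “all but one and a half” points), so it should be treated as an illustration of the logic rather than the prescribed graphical method; the data and names are hypothetical, and a positive slope is assumed.

def realistic_tolerance(x, y, lsl_y, usl_y):
    # Approximate realistic tolerance limits for input X that keep output Y in specification.
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    upper_off, lower_off = max(resid), min(resid)     # parallelogram boundary offsets
    # Project the intersections with the customer limits down to the X axis (steps 5 - 7).
    x_low = (lsl_y - intercept - lower_off) / slope
    x_high = (usl_y - intercept - upper_off) / slope
    return x_low, x_high

x = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8]    # Red X (input) readings
y = [5.1, 5.4, 5.9, 6.1, 6.6, 6.9, 7.3, 7.5, 8.0, 8.2]    # Green Y (output) readings
print(realistic_tolerance(x, y, lsl_y=5.5, usl_y=7.8))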
Links to additional information on tools used in the Understand step:
Tolerance Analysis
 
 [Apply]
Lot Plot:  A Lot Plot can be used to determine if a lot of parts should be accepted, sorted, or investigated further.  To construct and interpret a Lot Plot:
1) Measure 50 pieces, in 10 subgroups of 5 pieces each, from a single lot.
2) Create a histogram, with 7 – 16 divisions, of all 50 measurements.
3) From the histogram, determine the type of distribution (normal, bimodal, non-normal).
4) Calculate the average (Xbar) and range (R) of measurements in each subgroup.
5) Calculate the average of the subgroup averages (Xdbl-bar) and the average of the subgroup ranges (Rbar).
6) Calculate upper and lower lot limits (ULL, LLL):
7) Evaluate acceptability of lot.  For normally-distributed data, use the following guidelines:
  • All sample data within specification limits → Accept.
  • ULL and LLL are within specification limits → Accept.
  • ULL or LLL is outside specification limits → Acceptance decision is made by Material Review Board.
For non-normally-distributed data, use the following guidelines:
  • ULL and LLL are within specification limits → Investigate non-normality or repeat sampling.
  • ULL and/or LLL is outside specification limits → Sort.
For bimodally-distributed data, investigate the cause of bimodality or repeat sampling.
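     The lot-limit calculation in step 6 is commonly performed as Xdbl-bar ± 3σ, with σ estimated as Rbar/d2 (d2 ≈ 2.326 for subgroups of five); the Python sketch below uses that convention as an assumption and applies the normal-distribution guidelines above.  Names and data are illustrative only.

import random

def lot_plot_limits(subgroups, lsl, usl, d2=2.326):
    # Evaluate a lot from subgroup data; assumes normally-distributed measurements.
    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    x_dblbar = sum(xbars) / len(xbars)
    r_bar = sum(ranges) / len(ranges)
    ull = x_dblbar + 3 * r_bar / d2          # assumed conventional lot limits
    lll = x_dblbar - 3 * r_bar / d2
    data = [m for s in subgroups for m in s]
    if lsl <= min(data) and max(data) <= usl:
        decision = "accept (all sample data within specification)"
    elif lsl <= lll and ull <= usl:
        decision = "accept (lot limits within specification)"
    else:
        decision = "refer to Material Review Board"
    return round(ull, 3), round(lll, 3), decision

random.seed(1)
subgroups = [[round(random.gauss(10.0, 0.05), 3) for _ in range(5)] for _ in range(10)]
print(lot_plot_limits(subgroups, lsl=9.8, usl=10.2))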
Links to additional information on tools used in the Apply step:
Precontrol
 
 [Leverage]
To leverage what is learned from an improvement project, use of several tools may be repeated.  For example, similar processes may have a common Red X influencing their performance.  The tolerances required to maintain acceptable output from each, however, may be very different.  Each will require its own tolerance parallelogram.  In general, leveraging knowledge gained in one project does not preclude thorough analysis in another.
FMEA:  Though not a component of the Shainin System per se, a thoughtfully-constructed and well-maintained Failure Modes and Effects Analysis is a rich source of ideas for “leverage targets.”  For example, all products or processes that share a failure mode could potentially reap significant gains by replicating the improvements validated in a single project.
 
     The selection of tools at each stage of a project will depend on the specific situation.  A problem’s complexity (i.e. the number of variables involved) or the ability to disassemble and reassemble products without damage are examples of circumstances that will guide the use of tools.  Understanding the appropriate application of each is critical to project success.
 
The Shainin System and Six Sigma
     Vehement advocates of the Shainin System or Six Sigma often frame the relationship as an either/or proposition.  This adversarial stance is counterproductive.  Each framework provides a useful problem-solving structure and a powerful set of tools.  There is nothing to suggest that they are – or should be – mutually exclusive.
     Six Sigma is steeped in statistical analysis, while the Shainin System prefers to exploit empirical investigations.  Six Sigma traces a problem from input to output (X → Y), while the Shainin System employs a Y → X approach.  Six Sigma requires highly-trained specialists, while the Shainin System aims to put the tools to use on the shop floor.
     Despite their differences, these systems share common objectives, namely, process improvement and customer satisfaction.  In the shared objectives lies the potential for a cohesive, unified approach to problem-solving that capitalizes on the strengths of both frameworks.  One such unification proposes using Shainin tools within a DMAIC project structure (Exhibit 4).  Six Sigma tools could also be used to support a FACTUAL problem-solving effort.  Both frameworks are structured around diagnostic and remedial journeys, further supporting the view that they are complements rather than alternatives.
     The hierarchical structure of “Red X Problem Solvers” is another example of a similarity to Six Sigma that also highlights a contrast.  Students of the Shainin System begin as Red X Apprentices and advance to become Journeymen.  These titles have strong “blue-collar” connotations and are familiar to most “shop floor” personnel who are encouraged to learn and apply the tools.  Like Six Sigma, the Shainin System also has a designation for those with substantial experience who have been trained to coach others – the Red X Master.
 
 
     One need not have a sanctioned title to begin learning and applying useful tools.  Readers are encouraged to consult the references and links to learn more.  Also, JayWink Solutions is available for problem-solving assistance, training development and delivery, project selection assistance, and other operations-related needs.  Contact us for an assessment of our partnership potential.
 
     For a directory of “The War on Error” volumes on “The Third Degree,” see “Vol. I:  Welcome to the Army.”
 
References
[Link] “Strategies for Technical Problem Solving.”  Richard D. Shainin; Quality Engineering, 1993.
[Link] “An Overview of the Shainin System™ for Quality Improvement.”  Stefan H. Steiner, R. Jock MacKay, and John S. Ramberg;  Quality Engineering, 2008.
[Link] “Introduction to the Shainin method.”  Wikilean.
[Link] “Shainin and Six Sigma.”  Shainin The Red X Company.
[Link] “Using Shainin DOE for Six Sigma: an Indian case study.”  Anupama Prashar; Production Planning & Control, 2016.
[Link] “Developing Effective Red X Problem Solvers.”  Richard D. Shainin; Shainin The Red X Company, April 18, 2018.
[Link] “Shainin Methodology:  An Alternative or an Effective Complement to Six Sigma?”  Jan Kosina; Quality Innovation Prosperity, 2015.
[Link] “The Role of Statistical Design of Experiments in Six Sigma:  Perspectives of a Practitioner.”  T. N. Goh; Quality Engineering, 2002.
[Link] World Class Quality.  Keki R. Bhote; American Management Association, 1991.
[Link] “Training for Shainin’s approach to experimental design using a catapult.”  Jiju Antony and Alfred Ho Yuen Cheng; Journal of European Industrial Training, 2003.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[The War on Error – Vol. VI:  Six Sigma]]>Wed, 09 Sep 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/the-war-on-error-vol-vi-six-sigma     Despite the ubiquity of corporate Six Sigma programs and the intensity of their promotion, it is not uncommon for graduates to enter industry with little exposure and less understanding of their administration or purpose.  Universities that offer Six Sigma instruction often do so as a separate certificate, unintegrated with any degree program.  Students are often unaware of the availability or the value of such a certificate.
     Upon entering industry, the tutelage of an invested and effective mentor is far from guaranteed.  This can curtail entry-level employees’ ability to contribute to company objectives, or even to understand the conversations taking place around them.  Without a structured introduction, these employees may struggle to succeed in their new workplace, while responsibility for failure is misplaced.
     This installment of “The War on Error” aims to provide an introduction sufficient to facilitate entry into a Six Sigma environment.  May it also serve as a refresher for those seeking reentry after a career change or hiatus.
Six Sigma Defined
     “Six Sigma” became a term of art in the 1980s when Motorola used the statistical concept as the foundation for a company-wide quality goal and problem-solving strategy.  Popularity of the methodology began to skyrocket in the 1990s, thanks in large part to General Electric’s very public adoption and energetic promotion.  Widespread adoption followed, leading to Six Sigma becoming the de facto standard for quality management.  It became an actual standard in 2011, with the International Organization for Standardization’s (ISO) release of “ISO 13053: Quantitative Methods in Process Improvement – Six Sigma.”
     The foundation of a Six Sigma program, and the origin of its name, is process performance monitoring.  Process output is assumed to be normally distributed with the mean (μ, mu) centered between the specification limits.  The distance from the mean to either specification limit is given in standard deviations (σ, sigma), a measure of process variation.  The goal is to manage processes such that this distance is (at least) six standard deviations, or 6σ, as shown in Exhibit 1.  The total width of the distribution between specification limits (USL – LSL) is then 12σ, resulting in approximately two defects per billion opportunities.
     Understanding that no process is perfectly stable with its output mean centered between its specification limits, the developers of the Six Sigma methodology accounted for the difference between short- and long-term performance.  Empirical evidence indicated that process means could be expected to shift up to 1.5σ from center when long-term operational data is analyzed.  This results in a 4.5σ distance from the mean to one of the specification limits and a higher reject rate.  The shift essentially eliminates defects from the further specification limit, yielding an anticipated reject rate one-half that of a “4.5σ process,” or 3.4 defects per million opportunities (DPMO).  This is the target reject rate quoted by Six Sigma practitioners.  Exhibit 2 presents the mean shift and resultant reject rate graphically.
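     The oft-quoted defect rates can be verified with a few lines of Python using the standard normal distribution.  The sketch below computes the expected reject rate for a centered 6σ process and for the same process after a 1.5σ mean shift; the function name is illustrative.

from statistics import NormalDist

nd = NormalDist()

def reject_rate(sigma_level, mean_shift=0.0):
    # Fraction defective for specification limits at +/- sigma_level standard deviations,
    # with the process mean shifted by mean_shift standard deviations toward the upper limit.
    upper_tail = 1 - nd.cdf(sigma_level - mean_shift)
    lower_tail = nd.cdf(-sigma_level - mean_shift)
    return upper_tail + lower_tail

print(f"Centered 6-sigma process: {reject_rate(6.0) * 1e9:.1f} defects per billion")
print(f"With a 1.5-sigma shift:   {reject_rate(6.0, 1.5) * 1e6:.1f} DPMO")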
     The true goal of any process manager is to achieve zero defects, however unrealistic this may be.  Six Sigma process control seeks to come as close to this target as is economically and technologically feasible.  It engenders vastly more aggressive objectives than “traditional” process control that typically employs μ ± 3σ specification limits.
 
     The term “Six Sigma” is an umbrella covering two key methodologies, each with a unique application and purpose.  The DMAIC (“duh-MAY-ik”) methodology is used for process improvement; because it is the most frequently employed methodology, users often treat “Six Sigma” and “DMAIC” as interchangeable terms.  The DMADV (“duh-MAD-vee”) methodology is used for product or process design.  A description of each, including the five-step process that forms its acronym and some example tools used during each, follows.
 
DMAIC – for Process Improvement
     Existing processes that underperform can be improved using the most common Six Sigma methodology, DMAIC.  The acronym is derived from the five steps that comprise this problem-solving process:  Define, Measure, Analyze, Improve, and Control.  Exhibit 3 presents a basic flowchart of the DMAIC process.  Note that the first three steps are iterative; measurement and analysis may reveal the need for redefinition of the problem situation.
     For brevity, each phase of DMAIC is presented in a summary table.  The top cell of each table contains a description of the phase activities and purpose.  The lower left cell contains typical outputs, or deliverables, of that phase.  These items will be reviewed during a “phase-gate” or similar style review and must be approved to obtain authorization to proceed to the next phase.  The lower right cell lists examples of tools commonly used during this phase.  Full descriptions of the tools will not be provided, however.  Readers should consult the references cited in this post, or other sources, for detailed information on the use of the tools mentioned, as well as other available tools.
     As shown in Exhibit 3, a review should be conducted at this point in the process to verify that the problem definition remains accurate and sufficient.  If adjustments are needed, return to the Define phase to restate the problem situation.  Modify, append, or repeat the Measure phase, as necessary, and analyze any new data collected.  Repeat this cycle until measurement and analysis support the problem definition.

     Though lessons-learned activity and replication are focused on processes other than that which was the subject of the Six Sigma project, they are included in the Control phase discussion for two key reasons:
1) They are vitally important to maximizing the return on investment by limiting the amount of redundant work required for additional processes to capitalize on the knowledge gained during the project.
2) They take place at the conclusion of a project; therefore, the Control phase discussion is the most appropriate to append with these activities.
     Some descriptions of the DMAIC methodology will include lessons learned and replication as additional steps or follow-up; others make no mention of these valuable activities.  To minimize confusion and encourage standardization of best practices, they are considered elements of the Control phase for purposes of our discussion.
 
DMADV – for Product or Process Design
     It will be evident in the following discussion that there are many parallels between DMAIC and DMADV.  The five steps that comprise DMADV are Define, Measure, Analyze, Design, and Validate.  The first three steps, it may be noted, have the same names as those in DMAIC, but their execution differs because each process has its own purpose and objectives.  Overall execution of DMADV, however, closely parallels DMAIC, as can be seen by comparing the flowchart of DMADV, presented in Exhibit 4, with that of DMAIC in Exhibit 3.
     The fundamental difference between DMAIC and DMADV is that DMADV is proactive while DMAIC is reactive.  Another way to think of this distinction is that DMAIC is concerned with problems, while DMADV is focused on opportunities.  Though other “acronymed” approaches to proactive analysis exist, DMADV is the predominant methodology.  For this reason, it is frequently used interchangeably with the umbrella term Design for Six Sigma (DFSS), as will be done here [DMADV doesn’t roll off the tongue quite so eloquently as DMAIC or DFSS (“dee-eff-ess-ess”)].
     The phases of DMADV are presented in summary tables below.  Like the DMAIC summaries, the lists are not exhaustive; additional information can be found in the references cited or other sources.
     Review results of analyses with respect to the opportunity definition.  If revisions are needed, return to the Define phase and iterate as necessary.
     Though an organization’s efforts are most effective when the inclination is toward proactive behavior, or preventive measures, DFSS is in much less common use than DMAIC.  The lingering bias toward reactive solutions is reflected in the greater quantity and quality of resources discussing DMAIC; DFSS is often treated as an afterthought, if it is mentioned at all.  This provides a significant opportunity for any organization willing to expend the effort to execute a more thorough development process prior to launch.  A proactive organization can ramp up and innovate, satisfying customers’ evolving demands, while reactive competitors struggle with problems that were designed into their operations.
 
Belts and Other Roles
     Perhaps the most visible aspect of Six Sigma programs is the use of a martial arts-inspired “belt” system.  Each color of belt is intended to signify a corresponding level of expertise in the use of Six Sigma tools for process improvement.  The four main belts in a Six Sigma program are Yellow, Green, Black, and Master Black.  Other colors are sometimes referenced, but their significance is not universally accepted; therefore, they are excluded from this discussion.  Responsibilities of the belted and other important roles are described below.
  • Yellow Belt (YB):  “Front-line” employees are often yellow belts, including production operators in manufacturing and service providers in direct contact with customers.  Yellow belts typically perform the operations being studied and collect the required data under the supervision of a green or black belt.  As project team members, yellow belts provide critical insight into the process under review, suggest relevant test conditions, and offer potential improvements in addition to their regular responsibilities.
  • Green Belt (GB):  Green belts lead projects, guiding yellow belts in improvement efforts; they are also team members for larger projects led by black belts.  Green belts are often process engineers and production supervisors, affording them knowledge of the process under review and responsibility for its performance; projects are undertaken in addition to day-to-day operational duties.  The varied role of a green belt may require data collection, analysis, problem solving and solution development, training, and more.
  • Black Belt (BB):  Black belts are responsible for delivering value to the organization through the execution of Six Sigma projects.  As such, they are typically dedicated to the Six Sigma program with no peripheral responsibilities.  A black belt acts as a project leader, coach and mentor to green belts, resource coordinator to facilitate cross-functional teamwork, and presenter to report progress and gain approval to proceed at phase-gate reviews.
  • Master Black Belt (MBB):  Master black belts provide support to the entire Six Sigma program, from training and mentoring black and green belts to overseeing company-wide multi-site improvement initiatives.  MBBs also provide support in the selection and assignment of projects that are appropriate for a black or green belt, assist in the use of advanced statistics or other tools, identify training needs, and deliver the required training.  Smaller organizations, or those with nascent programs, may rely on external resources to fill this role.
     Though the presentation of material will likely differ among certifying organizations, the definition of responsibilities and required abilities for each belt are mostly consistent.  Standard competency requirements for a number of skills are summarized in Exhibit 5.
     The belts in an organization are directly responsible for executing Six Sigma improvement projects. To be successful, they need the support of other essential roles in the program.  These are described below.
  • Project Sponsor:  A project sponsor supports improvement projects undertaken in his/her area of responsibility by providing the necessary resources and removing barriers to execution.  These responsibilities require a certain level of authority; the project sponsor is typically a “process owner,” such as a production manager.  The sponsor participates in all phase-gate reviews and provides approval for the project to proceed.  S/he monitors the project to ensure its timely completion and evaluates it for potential replication elsewhere in the organization.
  • Deployment Manager:  The deployment manager administers the Six Sigma program.  This includes managing the number of belts in the organization and coordinating their assignments with their functional managers.  The deployment manager is also responsible for any facility resources dedicated to the program.
  • Champion:  A Six Sigma champion is typically a high-ranking, influential member of the quality function in the organization.  The champion is the chief promoter of the Six Sigma initiative within the organization, establishing the deployment strategy.  The champion also defines and advocates for business objectives to be achieved with Six Sigma.
     Every member of an organization contributes to the success or failure of Six Sigma initiatives, whether or not they have been given one of the titles described above.  Each person has the ability to aid or hinder efforts made by others.  Effective communication throughout the organization is critical to the success of a new Six Sigma program.  Explaining the benefits to the organization and to individuals can turn skeptics into supporters.  The more advocates a program has, the greater its chance of success.
 
Additional Considerations
     There are three important caveats offered here.  The first is common in many contexts – launching a Six Sigma program does not ensure success.  Put another way, desire does not guarantee ability.  A successful program requires the development of various disparate skills.  “Expert-level” skills in statistical analysis, for example, provide no indication of the ability to develop and implement creative and innovative solutions.
     Second, achieving six sigma performance has the potential to be a Pyrrhic victory.  That is, a misguided effort can be worse than no effort at all.  Analysis failures that lead to poorly-chosen objectives can divert resources from the most useful projects, causing financial performance to continue to decline while reports indicate improving process performance.  Many organizations have abandoned their Six Sigma programs as administration costs exceed the gains achieved.
     The third caveat is the “opposite side of the coin” from the first.  Any individual interested in improving process performance or product design need not delay for lack of a “belt.”  Certification does not guarantee success (caveat #1) and lack of certification does not suggest imminent failure.  This introductory post, other installments of “The Third Degree,” past and future, and various other resources can guide your improvement efforts and development journey.  No specialized, status-signaling attire is required.
 
     This installment of “The War on Error” series was written with two basic goals:
1) provide an introduction that will allow those without experience or formal training to understand and participate in conversations that take place in Six Sigma environments, and
2) provide a list of tools accessible to beginners to be used as an informal development plan.
Readers for which the first goal was achieved are encouraged to take full advantage of the second.  Your development is your responsibility; do not wait to be invited to the “belt club.”
 
     JayWink Solutions is available for training plan development and delivery, project selection and execution assistance, and general problem-solving.  Contact us for an assessment of how we can help your organization reach its goals.
 
     For a directory of “The War on Error” volumes on “The Third Degree,” see “Vol. I:  Welcome to the Army.”
 
References
[Link] “ISO 13053: Quantitative Methods in Process Improvement – Six Sigma – Part 1:  DMAIC Methodology.”  ISO, 2011.
[Link] “ISO 13053: Quantitative Methods in Process Improvement – Six Sigma – Part 2:  Tools and Techniques.”  ISO, 2011.
[Link] “Six Sigma,” “DMAIC,” and “Design for Six Sigma.”  Wikipedia.
[Link] “Integrating the Many Facets of Six Sigma.”  Jeroen de Mast; Quality Engineering, 2007.
[Link] “The Role of Statistical Design of Experiments in Six Sigma:  Perspectives of a Practitioner.”  T. N. Goh; Quality Engineering, 2002.
[Link] “Six Sigma Fundamentals:  DMAIC vs. DMADV.”  Six Sigma Daily, June 17, 2014.
[Link] “DMADV – Another SIX SIGMA Methodology.”  What is Six Sigma?
[Link] “Six Sigma Belts.”  Jesse Allred; Lean Challenge, February 18, 2019.
[Link] “Six Sigma Belts and Their Meaning.”  Tony Ferraro; 5S Today, August 22, 2013.
[Link] The Six Sigma Memory Jogger II.  Michael Brassard, Lynda Finn, Dana Ginn, Diane Ritter; GOAL/QPC, 2002.
[Link] The New Lean pocket Guide XL.  Don Tapping; MCS Media, Inc., 2006.
[Link] Creating Quality.  William J. Kolarik; McGraw-Hill, Inc., 1995.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[The War on Error – Vol. V:  Get Some R&R - Attributes]]>Wed, 26 Aug 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/the-war-on-error-vol-v-get-some-rr-attributes     While Vol. IV focused on variable gauge performance, this installment of “The War on Error” presents the study of attribute gauges.  Requiring the judgment of human appraisers adds a layer of nuance to attribute assessment.  Although we refer to attribute gauges, assessment may be made exclusively by the human senses.  Thus, analysis of attribute gauges may be less intuitive or straightforward than that of their variable counterparts.
     Conducting attribute gauge studies is similar to variable gauge R&R studies.  The key difference is in data collection – rather than a continuum of numeric values, attributes are evaluated with respect to a small number of discrete categories.  Categorization can be as simple as pass/fail; it may also involve grading a feature relative to a “stepped” scale.  The scale could contain several gradations of color, transparency, or other visual characteristic.  It could also be graded according to subjective assessments of fit or other performance characteristic.
     Before the detailed discussion of attribute gauge studies begins, we should clarify why subjective assessments are used.  The most obvious reason is that no variable measurement method or apparatus exists to evaluate a feature of interest.  However, there are some variable measurements that are replaced by attribute gauges for convenience.  Variable gauges that are prohibitively expensive, operate at insufficient rates, or are otherwise impractical in a production setting, are often supplanted by less sophisticated attribute gauges.  Sophisticated equipment may be used to create assessment “masters” or to validate subjective assessments, while simple tools are used to maintain productivity.  Periodic verification ensures that quality is not sacrificed to achieve desired production volumes.
     Although direct calculations of attribute gauge repeatability and reproducibility may not be practical without the use of a software package, the assessments described below are analogous.  The “proxy” values calculated provide sufficient insight to develop confidence in the measurement system and to identify opportunities to improve its performance.
     Like variable gauge R&R studies, evaluation of attribute gauges can take various forms.  The presentation below is not comprehensive, but an introduction to common techniques.  Both types of analysis require similar groundwork to be effective; readers may want to review the “Preparation for a GRR Study” section in Vol. IV before continuing.

Attribute Short Study
     The “short method” of attribute gauge R&R study requires two appraisers to evaluate each of (typically) 30 parts twice.  This presentation uses a binary decision – “accept” or “reject” – to demonstrate the study method, though more categories could be used.  Exhibit 1 presents one possible format of a data collection and computation form.  Use of the form is described below, referencing the number bubbles in Exhibit 1.
1:  Each appraiser’s evaluations are recorded as they are completed, in random order, in the columns labeled “Trial 1” and “Trial 2.”  Simple notation, such as “A” for “accept” and “R” for “reject” is recommended to simplify data collection and keep the form neat and legible.
2:  Decisions for each part are compared to determine the consistency of each appraiser’s evaluations.  If an appraiser reached the same conclusion both times s/he evaluated a part, a “1” is entered in that appraiser’s consistency column in the row corresponding to that part.  If different conclusions were reached, a “0” is recorded.  Below the trial data entries, the number of consistent evaluation pairs is tallied.  The proportion of parts receiving consistent evaluations is then calculated and displayed as a percentage.
3:  The standard evaluation result is recorded for each part.  The standard can be determined via measurement equipment, “expert” evaluation, or other trusted method.  The standard must be unknown to the appraisers during the study.
4:  The first two “Agreement” columns record where each appraiser’s evaluation matches the standard (enter “1”).  A part’s evaluation can only be scored a “1” for agreeing with the standard if the appraiser reached the same conclusion in both trials.  Put another way, for A=Std = 1, A=A must equal 1; if A=A = 0, A=Std is automatically ”0.”  Column results are totaled and percentages calculated.
5:  Place a “1” in the “A/B” Agreement column for each part with consistent appraiser evaluations (A=A = 1 and B=B = 1) that match.  If either appraiser is inconsistent (A=A = 0 or B=B = 0) or the two appraisers’ evaluations do not match, enter “0” in this column.  Total the column and calculate the percentage of matching evaluations.
6:  The final column records the instances when both appraisers were consistent, in agreement with each other, and in agreement with the standard.  For each part that obtained these results, a “1” is entered in this column; all others receive a “0.”  Again, total the column results and calculate the percentage of parts that meet the criteria.

     The percentage values calculated in Exhibit 1 are proxy values; the objectives are inverted compared to variable gauge R&R studies.  That is, higher values are desirable in attribute gauge short studies.
     The consistency values (A=A, B=B) are analogous to repeatability in variable gauge studies.  Appraisers’ results could be combined into a single value to more closely parallel a variable study; however, this is not a standard calculation.  Caution must be exercised in its use; any citation should be accompanied by an explicit explanation of its calculation and interpretation to prevent confusion or misuse.
     A/B Agreement (A=B) is analogous to variable gauge reproducibility; it is an indication of how well-developed the measurement system is.  Better-developed attribute systems will produce more matching results among appraisers, just as is the case with variable systems.
     The composite value in the attribute study, analogous to variable system R&R, is Total Agreement (A=B=Std).  This value reflects the measurement system’s ability to consistently obtain the “correct” result over time when multiple appraisers are employed.
     While the calculations and interpretations are quite different from a variable R&R study, the insight gained from an attribute short study is quite similar.  The results will aid the identification of improvement opportunities, whether in appraiser training, refining instructions, clarifying acceptance standards, or equipment upgrades.  The attribute short study is an excellent starting point for evaluating systems that, historically, had not received sufficient attention to instill confidence in users and customers.
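     For those who prefer to script the tallies described above, the short Python sketch below mirrors the logic of the Exhibit 1 form; the decisions shown are hypothetical, and the form itself remains the working record of the study.

# Attribute short study tallies (illustrative sketch; hypothetical data)
# "A" = accept, "R" = reject
a_trials = [("A", "A"), ("A", "R"), ("R", "R")]   # appraiser A: (trial 1, trial 2) per part
b_trials = [("A", "A"), ("A", "A"), ("R", "R")]   # appraiser B
standard = ["A", "A", "R"]                        # reference decision per part

n = len(standard)
a_consistent = [t1 == t2 for t1, t2 in a_trials]                                      # A=A
b_consistent = [t1 == t2 for t1, t2 in b_trials]                                      # B=B
a_std = [c and t[0] == s for c, t, s in zip(a_consistent, a_trials, standard)]        # A=Std
b_std = [c and t[0] == s for c, t, s in zip(b_consistent, b_trials, standard)]        # B=Std
a_b = [ca and cb and ta[0] == tb[0]
       for ca, cb, ta, tb in zip(a_consistent, b_consistent, a_trials, b_trials)]     # A=B
a_b_std = [ab and ta[0] == s for ab, ta, s in zip(a_b, a_trials, standard)]           # A=B=Std

for label, flags in [("A=A", a_consistent), ("B=B", b_consistent), ("A=Std", a_std),
                     ("B=Std", b_std), ("A=B", a_b), ("A=B=Std", a_b_std)]:
    print(f"{label}: {sum(flags)} of {n} ({100 * sum(flags) / n:.0f}%)")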

Effectiveness and Error Rates
     Perhaps even shorter than the short study described above is the calculation of measurement system effectiveness, which provides shallow, but rapid, insight into system performance.  Measurement system effectiveness is defined as:
Effectiveness assessments are typically accompanied by calculations of miss rates and false alarm rates.  Although all of these values are defined in terms of a measurement system, they are calculated per appraiser.
     An appraiser’s miss rate represents his/her Type II error and is calculated as follows:
The false alarm rate represents an appraiser’s Type I error; it is calculated as follows:
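     A minimal sketch of all three calculations follows, assuming the conventional definitions:  effectiveness as the share of correct decisions among all decision opportunities, the miss rate as the share of nonconforming parts an appraiser accepts, and the false alarm rate as the share of conforming parts an appraiser rejects.  The counts used are hypothetical.

# Per-appraiser effectiveness, miss rate, and false alarm rate
# (illustrative sketch; counts below are hypothetical)
decisions_total   = 60   # total opportunities for a decision (parts x trials)
decisions_correct = 54   # decisions that match the standard

nonconforming_opportunities = 18   # evaluations of parts the standard rejects
misses = 2                         # nonconforming parts accepted (Type II error)

conforming_opportunities = 42      # evaluations of parts the standard accepts
false_alarms = 4                   # conforming parts rejected (Type I error)

effectiveness = decisions_correct / decisions_total
miss_rate = misses / nonconforming_opportunities
false_alarm_rate = false_alarms / conforming_opportunities

print(f"Effectiveness:    {effectiveness:.1%}")
print(f"Miss rate:        {miss_rate:.1%}")
print(f"False alarm rate: {false_alarm_rate:.1%}")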
     Appraisers’ performance on each metric is compared to threshold values – and to each other – to assess overall measurement system performance.  One set of thresholds used for this assessment is presented in Exhibit 2.  These guidelines are not statistically derived; they are empirical results.  As such, users may choose to modify these thresholds to suit their needs.
     Identification of improvement opportunities requires review of each appraiser’s results independently and in the aggregate.  For example, low effectiveness may indicate that an appraiser requires remedial training.  However, if all appraisers demonstrate low effectiveness, a problem more deeply-rooted in the measurement system is possibly the cause.  This type of discovery is only possible when both levels of review are conducted.  More sophisticated investigations may be required to identify specific issues and opportunities.

Cohen’s Kappa
     Cohen’s Kappa, often called simply “kappa,” is a measure of agreement between two appraisers of attributes.  It accounts for agreement due to chance to assess the true consistency of evaluations among appraisers.  A data summary table is presented in Exhibit 3 for the case of two appraisers, three categories, and one trial.  The notation used is as follows:
  • A1B2 = the number of parts that appraiser A placed in category 1 and appraiser B placed in category 2.
  • (A=B)1 = the number of parts that appraisers A and B agree belong in category 1.
Total agreement between appraisers, including that due to chance, is
where n = ∑Rows = ∑Cols = the total number of evaluation comparisons.  In order to subtract the agreement due to chance from total agreement, the agreement due to chance for each categorization is calculated and summed:
where Pi is the proportion of agreements and ci is the number of agreements due to chance in the ith category.  Therefore, ∑Pi is the proportion of agreements in the entire study due to chance, or pε.  Likewise, ∑ci is the total number of agreements in the entire study that are due to chance, or nε.
     To find kappa, use
where pa and na are the proportion and number, respectively, of appraiser agreements.  To validate the kappa calculation, confirm that κ ≤ pa ≤ 1.  Also, 0 ≤ κ ≤ 1 is a typical requirement.  A kappa value of 1 indicates “perfect” agreement, or reproducibility, between appraisers, while κ = 0 indicates no agreement whatsoever beyond that due to chance.  Discussion of negative values of kappa, allowed in some software, is beyond the scope of this presentation.
     Acceptance criteria within the 0 ≤ κ ≤ 1 range vary by source.  Minimum acceptability is typically placed in the 0.70 – 0.75 range, while κ > 0.90 is desirable.  If you prefer percentage notation, κ > 90% is your ultimate goal.  Irrespective of specific threshold values, a higher value of kappa indicates a more consistent measurement system.  Note, however, that Cohen’s Kappa makes no reference to standards; therefore, evaluation of a measurement system by this method is incomplete.
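     The arithmetic is compact enough to verify by hand or with a short script.  The sketch below assumes the Exhibit 3 summary has been reduced to a square table of counts (rows for appraiser A’s categories, columns for appraiser B’s); the counts are hypothetical.

# Cohen's Kappa from a two-appraiser category-count table
# (illustrative sketch; counts are hypothetical)
table = [
    [22,  2,  1],   # appraiser A category 1 vs. appraiser B categories 1-3
    [ 3, 18,  2],   # appraiser A category 2
    [ 1,  2, 19],   # appraiser A category 3
]

n = sum(sum(row) for row in table)                      # total evaluation comparisons
n_agree = sum(table[i][i] for i in range(len(table)))   # agreements (diagonal)
p_a = n_agree / n                                       # proportion of agreement

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
# agreement expected by chance in each category, summed over all categories
p_e = sum((r / n) * (c / n) for r, c in zip(row_totals, col_totals))

kappa = (p_a - p_e) / (1 - p_e)
print(f"pa = {p_a:.3f}, chance agreement = {p_e:.3f}, kappa = {kappa:.3f}")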

Analytic Method
     Earning its shorthand title of “long method” of attribute gauge R&R study, the analytic method uses a non-fixed number of samples, known reference values, probability plots, and statistical lookup tables.  A quintessential example of this technique’s application is the validation of an accept/reject apparatus (e.g. Go/NoGo plug gauge) used in production because it is faster and more robust than a more precise instrument (e.g. bore gauge).
     Data collection begins with the precise measurement of eight samples to obtain reference values for each.  Each part is then evaluated twenty times (m = 20) with the attribute gauge; the number of times each sample is accepted is recorded.  Results for the eight samples must meet the following criteria:
  • part with the smallest reference value accepted zero times (a = 0),
  • part with the largest reference value accepted twenty times (a = 20),
  • remaining parts accepted between one and nineteen times (1 ≤ a ≤ 19).
If the requirements are not met, the process should be repeated with additional samples until all criteria have been satisfied.  When adequate data have been collected, calculate the probability of acceptance for each part according to the following formula:
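     One commonly used convention for this calculation, assumed in the sketch below, adjusts the raw acceptance fraction a∕m by half a count for the interior parts so that plotted probabilities avoid exactly 0 or 1; confirm the exact rule, particularly for the end parts, against the MSA reference in use.

# Probability of acceptance for each reference part (illustrative sketch)
# Adjustment below is a common convention, assumed here; treatment of the
# end parts (a = 0, a = m) should follow the MSA reference in use.
m = 20   # evaluations per part

def acceptance_probability(a, m=20):
    if 0 < a < m:
        if a / m < 0.5:
            return (a + 0.5) / m
        if a / m > 0.5:
            return (a - 0.5) / m
        return 0.5
    return a / m   # end parts: 0.0 or 1.0

accept_counts = [0, 2, 5, 9, 12, 16, 19, 20]   # hypothetical study results
for a in accept_counts:
    print(f"a = {a:2d}  ->  P'a = {acceptance_probability(a, m):.3f}")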
     From the probability calculations, a Gauge Performance Curve (GPC) is generated; the format shown in Exhibit 4 may be convenient for presentation.  However, the preferred option, for purposes of calculation, is to create the GPC on normal probability paper, as shown in Exhibit 5.  The eight (or more) data points are plotted and a best-fit line drawn through the data.  The reference values plotted in Exhibits 4 and 5 are deviations from nominal, an acceptable alternative to the actual measurement value.
     Measurement system bias can now be determined from the GPC as follows:
where Xt is the reference value at the prescribed probability of acceptance.
     Measurement system repeatability is calculated as follows:
where 1.08 is an adjustment factor used when m = 20.
     Significance of the bias is evaluated by calculating the t-statistic,
and comparing it to t0.025,df.  For this case, df = m -1 = 19 and t0.025,19 = 2.093, as found in the lookup table in Exhibit 6.  If t > 2.093, the measurement system exhibits significant bias; potential corrective actions should be considered.
     Like the previous methods described, the analytic method does not provide R&R results in the same way that a variable gauge study does.  It does, however, provide powerful insight into attribute gauge performance.  One advantage of the long study is the predictive ability of the GPC.  The best-fit line provides an estimate of the probability of acceptance of a part with any reference value in or near the expected range of variation.  From this, a risk profile can be generated, focusing improvement efforts on projects with the greatest expected value.
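     As a rough illustration of how the GPC calculations can be scripted, the sketch below fits a straight line to the adjusted acceptance probabilities on the normal-quantile scale and reads reference values from the fitted line.  The bias convention (acceptance limit minus the reference value at 50% acceptance) and the 1.08 repeatability adjustment are assumptions drawn from common practice; the data and the limit are hypothetical.

# Fitting the Gauge Performance Curve on the normal-quantile scale
# (illustrative sketch; conventions assumed from common practice)
import numpy as np
from scipy.stats import norm

# Hypothetical interior points: deviation from nominal and adjusted P'a
ref_dev  = np.array([-0.04, -0.03, -0.02, -0.01, 0.00, 0.01])
p_accept = np.array([0.125, 0.275, 0.475, 0.625, 0.825, 0.975])

z = norm.ppf(p_accept)                           # probability -> normal quantile
slope, intercept = np.polyfit(ref_dev, z, 1)     # best-fit line: z = slope*x + intercept

def x_at(p):
    """Reference value at which the fitted GPC predicts acceptance probability p."""
    return (norm.ppf(p) - intercept) / slope

limit = 0.00                                          # hypothetical acceptance limit
bias = limit - x_at(0.5)                              # assumed convention
repeatability = (x_at(0.995) - x_at(0.005)) / 1.08    # 1.08 adjustment for m = 20

print(f"bias = {bias:.4f}, repeatability = {repeatability:.4f}")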

     Other methods of attribute gauge performance assessment are available, including variations and extensions of those presented here.  The techniques described are appropriate to new analysts, or for measurement systems that have been subject to no previous assessment, and can serve as stepping stones to more sophisticated investigations as experience is gained and “low-hanging fruit” is harvested.

     JayWink Solutions is available to assist you and your organization with quality and operational challenges.  Contact us for an independent review of your situation and action plan proposal.

     For a directory of “The War on Error” volumes on “The Third Degree,” see “Vol. I:  Welcome to the Army.”
 
References
[Link] “Measurement Systems Analysis,” 3ed.  Automotive Industry Action Group, 2002.
[Link] “Conducting a Gage R&R.”  Jorge G. Tavera Sainz; Six Sigma Forum Magazine, February 2013.
[Link] “Introduction to the Gage R & R.”  Wikilean.
[Link] “Attribute Gage R&R.”  Samuel E. Windsor; Six Sigma Forum Magazine, August 2003.
[Link] “Cohen's Kappa.”  Real Statistics, 2020.
[Link] “Ensuring R&R.”  Scott Force; Quality Progress, January 2020.
[Link] “Measurement system analysis with attribute data.”  Keith M. Bower; KeepingTAB #35 (Minitab Newsletter), February 2002.
[Link] Creating Quality.  William J. Kolarik; McGraw-Hill, Inc., 1995.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[The War on Error – Vol. IV:  Get Some R&R - Variables]]>Wed, 12 Aug 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/the-war-on-error-vol-iv-get-some-rr-variables     While you may have been hoping for rest and relaxation, the title actually refers to Gauge R&R – repeatability and reproducibility.  Gauge R&R, or GRR, comprises a substantial share of the effort required by measurement system analysis.  Preparation and execution of a GRR study can be resource-intensive; taking shortcuts, however, is ill-advised.  The costs of accepting an unreliable measurement system are long-term and far in excess of the short-term inconvenience caused by a properly-conducted analysis.
     The focus here is the evaluation of variable gauges.  Prerequisites of a successful GRR study will be described and methodological alternatives will be defined.  Finally, interpretation of results and acceptance criteria will be discussed.
What is Repeatability and Reproducibility?
     To effectively conduct a measurement system analysis, one must first be clear on what comprises a measurement system.  Components of a measurement system include:
  • parts being measured,
  • appraisers (individuals doing the measuring),
  • measurement equipment, including any tools, fixtures, and gauges used to complete a measurement,
  • the environment (temperature, humidity, cleanliness, organization, etc.)
  • instructions, procedures, manuals, etc.,
  • data collection tools (paper and pen, computer and software, etc.),
  • any element, physical or otherwise (e.g. managerial pressure, time constraints), that can influence the performance of measurements or the collection or analysis of data.
Repeatability and reproducibility each focus on one of these components.  For a GRR study to be valid, or useful, the remaining components must be held constant.
     Repeatability is an estimate of the variation in measurements induced by the measurement equipment – the equipment variation (EV).  This is also known as “within-system variation,” as no component of the system has changed.  Repeatability measures the variation in measurements taken by the same appraiser on the same part with the same equipment in the same environment following the same procedure.  Changing any of these components during the study invalidates the results.
     Reproducibility is an estimate of the variation in measurements induced by changing one component of the measurement system – the appraiser – while holding all others constant.  This is called appraiser variation (AV) for obvious reasons.  Reproducibility is also referred to as “between-systems variation” to reflect the ability to extend analysis to include multiple sites (i.e. “systems”).  This extended analysis is significantly more complex than a single-system evaluation and is beyond the scope of this presentation.
     Repeatability and reproducibility, collectively, are called “width variation.”  This term refers to the width of the normal distribution of measurement values obtained in a GRR study.  This is also known as measurement precision.  Use of a GRR study can guide improvement efforts that provide real value to an organization.  If resources are allocated to process modifications when it is the measurement system that is truly in need of improvement, the organization increases its costs without extracting significant value from the effort.  Misplaced attention can cause a superbly-performing process to be reported as mediocre – or worse.
 
Preparation for a GRR Study
     Conducting a successful GRR study requires organization and disciplined execution.  Several physical and procedural prerequisites must be satisfied to ensure valid and actionable results are obtained.  The first step is to assign a study facilitator to be responsible for preparation, execution, and interpretation.  The facilitator should have knowledge of the measurement process, the parts to be measured, and the significance of each step to be taken in preparation for and in the execution of the study.  This person will be responsible for maintaining the structure and discipline required to conduct a reliable study.
     Next, the measurement system to be evaluated must be defined.  That is, all components of the system, as discussed in the previous section, are to be identified.  This includes the environmental conditions under which measurements will be taken, part preparation requirements, relevant internal or external standards, and so on.  It is important to be thorough and accurate in this step; the definition of the system in use significantly influences subsequent actions and the interpretation of results.
     The next few steps are based directly on the system definition; they are not strictly sequential.  The stability of the system must be verified; that is, it must be known, prior to beginning a GRR study, that the components of the system will be constant for the duration of the study.  Process adjustment, tooling replacement, or procedural change that occurs mid-study will invalidate the results.
     The parts to be used in the study must be capable of withstanding several measurement cycles. If the measurement method or handling of parts required to complete it cannot be repeated numerous times without causing damage to the parts, other methods of evaluation may be required.
     Review historical data collected by the measurement system, as defined above, to confirm a normal distribution.  If the measurement data is not normally-distributed, there may be other issues (i.e. “special causes”) that require attention before a successful GRR can be conducted.  The more the data is skewed, or otherwise non-normal, the less reliable the GRR becomes.
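     A quick screen for normality might look like the sketch below, which applies a Shapiro-Wilk test to hypothetical historical data at a conventional 0.05 significance level; a probability plot and process knowledge should accompany any formal judgment.

# Quick normality screen of historical measurement data (illustrative sketch)
from scipy.stats import shapiro

historical = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03, 10.00, 9.99,
              10.04, 10.02, 9.96, 10.01, 10.03, 9.98, 10.00, 10.02]   # hypothetical data

stat, p_value = shapiro(historical)    # Shapiro-Wilk test for normality
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Evidence of non-normality; investigate special causes before the GRR study.")
else:
    print("No strong evidence of non-normality detected.")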
     Once these prerequisites have been verified, the type of study to be conducted can be chosen.  A “short” study employs the range method, while a “long” study employs graphical analysis and the average and range method or the ANOVA method.  The flowchart presented in Exhibit 1 provides an aid to method selection.  Each method will be discussed further in subsequent sections on execution of GRR studies.
     The range of variation over which measurement system performance will be evaluated must also be defined.  Though other ranges can be used, we will focus on two:  process variation and specification tolerance.  Process variation is the preferred range, as it accurately reflects the realized performance of a process and its contribution to measurement variability.  The specification tolerance is a reasonable substitute in many instances, such as screening evaluations, or when comparisons of processes are to be made.
     Other procedural prerequisites include defining how the measurement sequence will be randomized and how data will be recorded (e.g. automatically or manually, forms to be used).   When the procedural activities are complete, physical preparations can begin.  This may include organizing the area around the subject measurement equipment to facilitate containment of sample parts; the study may require more part storage than is normally used, for example.  Participation of the facilitator may also require some accommodation.
     Physical preparations of sample part storage should include provisions for identification of parts.  Part identification should be known only to the facilitator, for purposes of administration and data collection, to avoid any potential influence on appraisers.
     Once part storage has been prepared, samples can be collected.  Samples should be drawn from standard production runs to ensure that they have been given no special attention that would influence the study.  The number of samples to be collected will be discussed in the sections pertaining to the different study methods.
     The final preparation steps are selection of appraisers and scheduling of measurements.  Appraisers are selected from the pool identified in the system definition and represent the range of experience or skill available.  Appraisers must be familiar with the system to be studied and perform the required measurements as part of their regular duties.
     Scheduling the measurements must allow for part “soak” time in the measurement environment, if required.  The appraisers’ schedules and other responsibilities must also be considered.  The facilitator should schedule the measurements to minimize the impact on normal operations.
     With all preparations complete, the GRR study begins in earnest.  The following three sections present the methods of conducting GRR studies.
 
Range Method
     The range method is called a “short” study because it uses fewer parts and appraisers and requires fewer calculations than other methods.  It does not evaluate repeatability and reproducibility separately; instead, it produces a single approximation of measurement system variability (R&R).  The range method is often used as an efficient check of measurement system stability over time.
     Range method calculations require two appraisers to measure each of five parts in random order.  The measurements and preliminary calculations are presented conceptually in Exhibit 2.
The Gauge Repeatability and Reproducibility is calculated as follows:
     GRR = R̄ ∕ d2*,
where R̄ is the average range of measurements from the data table (Exhibit 2) and d2* is found in the table in Exhibit 3.  For two appraisers (m = 2) measuring five parts (g = 5), d2* = 1.19105.
     The commonly-cited value is the percentage of measurement variation attributable to the measurement system, %GRR, calculated as follows:
     %GRR = 100% * (GRR/Process Standard Deviation),
where GRR is gauge repeatability and reproducibility, calculated above, and process standard deviation is determined from long-run process data.
     Interpretation of GRR results is typically consistent with established guidelines, as follows:
  • %GRR ≤ 10%:  measurement system is reliable.
  • 10% < %GRR ≤ 30%:  measurement system may be acceptable, depending on causes of variation and criticality of measured feature to product performance.
  • %GRR > 30%:  measurement system requires improvement.
Since the range method provides only a composite estimate of measurement system performance, additional analysis or use of another GRR study method may be required to identify specific improvement opportunities.
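     The entire range-method computation fits in a few lines, as in the sketch below; the measurements and process standard deviation are hypothetical, and d2* = 1.19105 is the value cited above for two appraisers and five parts.

# Range-method (short study) GRR: two appraisers, five parts (illustrative sketch)
appraiser_a = [10.02, 10.05,  9.98, 10.10, 10.03]   # hypothetical measurements
appraiser_b = [10.04, 10.01, 10.00, 10.08, 10.05]

ranges = [abs(a - b) for a, b in zip(appraiser_a, appraiser_b)]   # per-part range
r_bar = sum(ranges) / len(ranges)                                 # average range

d2_star = 1.19105            # m = 2 appraisers, g = 5 parts (Exhibit 3)
grr = r_bar / d2_star

process_std_dev = 0.05       # hypothetical, from long-run process data
pct_grr = 100 * grr / process_std_dev
print(f"GRR = {grr:.4f}, %GRR = {pct_grr:.1f}%")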
 
Average and Range Method
     The average and range method improves on the range method by providing independent evaluations of repeatability and reproducibility in addition to the composite %GRR value.  To perform a GRR study according to the average and range method, a total of 90 measurements are recorded.  Three appraisers, in turn, measure each of ten parts in random order.  Each appraiser repeats the set of measurements twice.
     For each set of measurements, a unique random order of parts should be used, with all measurement data hidden from appraisers.  Knowledge of other appraisers’ results, or their own previous measurement of a part, can directly or indirectly influence an appraiser’s performance; hidden measurement data prevents such influence.  For the same reason, appraisers should be instructed not to discuss their measurement results, techniques, or other aspects during the study.  After completion of the study, such discussion may be valuable input to improvement efforts; during the study, it only serves to reduce confidence in the validity of the results.
     Modifications can be made to the standard measurement sequence described above to increase the efficiency of the study.  Accommodations can be made for appraisers’ non-overlapping work schedules, for example, by recording each appraiser’s entire data set (10 parts x 3 iterations = 30 measurements) without intervening measurements by other appraisers.  Another example is an adaptation to fewer than 10 parts being available at one time.  In this situation, the previously described “round-robin” process is followed for the available parts.  When additional parts become available, the process is repeated until the data set is complete.
     A typical GRR data collection sheet is shown in Exhibit 4.  This form also includes several computations that will be used as inputs to the GRR calculations and graphical analysis.
     Graphical analysis of results is an important forerunner of numerical analysis.  It can be used to screen for anomalies in the data that indicate special-cause variation, data-collection errors, or other defect in the study.  Quickly identifying such anomalies can prevent effort from being wasted analyzing and acting on defective data.  Some example tools are introduced below, with limited discussion.  More information and additional tools can be found in the references or other sources.
     The average of each appraiser’s measurements is plotted for each part in the study on an average chart to assess between-appraiser consistency and discrimination capability of the measurement system.  A “stacked” or “unstacked” format can be used, as shown in Exhibit 5.

Exhibit 5:  Average Charts

Similarly, a range chart, displaying the range of each appraiser’s measurements for each part, in stacked or unstacked format, as shown in Exhibit 6, can be used to assess the measurement system’s consistency between appraisers.

Exhibit 6:  Range Charts

     Consistency between appraisers can also be assessed with X-Y comparison plots.  The average of each appraiser’s measurements for each part are plotted against those of each other appraiser.  Identical measurements would yield a 45° line through the origin.  An example of X-Y comparisons of three appraisers displayed in a single diagram is presented in Exhibit 7.
     A scatter plot, such as the example in Exhibit 8, can facilitate identification of outliers and patterns of performance, such as one appraiser that consistently reports higher or lower values than the others.  The scatter plot groups each appraiser’s measurements for a single part, then groups these sets per part.
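     Any charting tool can produce these plots; the brief sketch below builds unstacked average and range charts from hypothetical data simply to show how little effort the graphical screen requires.

# Average and range charts for a GRR study (illustrative sketch; hypothetical data)
import numpy as np
import matplotlib.pyplot as plt

# measurements[appraiser][part][trial]: part-to-part variation plus measurement noise
rng = np.random.default_rng(1)
measurements = 10 + 0.2 * np.arange(10)[None, :, None] + 0.02 * rng.standard_normal((3, 10, 3))

averages = measurements.mean(axis=2)                            # per appraiser, per part
ranges = measurements.max(axis=2) - measurements.min(axis=2)    # per appraiser, per part

fig, (ax_avg, ax_rng) = plt.subplots(2, 1, sharex=True)
parts = np.arange(1, 11)
for i, name in enumerate(["Appraiser A", "Appraiser B", "Appraiser C"]):
    ax_avg.plot(parts, averages[i], marker="o", label=name)
    ax_rng.plot(parts, ranges[i], marker="o", label=name)
ax_avg.set_ylabel("Average")
ax_rng.set_ylabel("Range")
ax_rng.set_xlabel("Part")
ax_avg.legend()
plt.show()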
     If no data-nullifying issues are discovered in the graphical analysis, the study proceeds to numerical analysis.  Calculations are typically performed on a GRR report form, such as that shown in Exhibit 9.  Values at the top of the form are transferred directly from the data collection sheet (Exhibit 4).  Complete the calculations prescribed in the left-hand column of the GRR report form; the values obtained can then be used to complete the right-hand column.  The formulas provided on both forms result in straightforward calculations; therefore, we will focus on the significance of the results rather than their computation.
     The right-hand column of the GRR report contains the commonly-cited values used to convey the effectiveness of measurement systems.  The following discussion summarizes each and, where applicable, offers potential targets for improvement.
     Equipment variation (%EV) is referred to as gauge repeatability, our first “R.”  A universally-accepted limit on repeatability is not available; judgment in conjunction with other relevant information is necessary.  If repeatability is deemed unacceptable, or in need of improvement, potential targets include:
  • within-part variation; e.g. roundness of shaft will affect diameter measurement,
  • wear, distortion, or poor design or fabrication of fixtures or other components,
  • contamination of parts or equipment,
  • damage or distortion of parts,
  • parallax error (analog gauges),
  • overheating of electronic circuits,
  • mismatch of gauge and application.
     Appraiser variation (%AV) is our second “R,” reproducibility.  Like repeatability, it must be judged in conjunction with other information.  If AV is excessive, the measurement process must be observed closely to identify its sensitivities.  Potential sources of variation include:
  • differing techniques used by appraisers to perform undefined portions of the process; (e.g. material handling),
  • quality of training, completeness and clarity of instructions,
  • physical differences of appraisers that can affect placement of parts or operation of equipment (i.e. ergonomics); (e.g. height, strength, handedness),
  • wear, distortion, or poor design or fabrication of fixtures that allows differential location of parts,
  • mismatch of gauge and application,
  • time-based environmental fluctuations; e.g. study conducted across shifts in a facility with day/night HVAC settings.
     The composite value, repeatability and reproducibility (R&R, %GRR) is often the starting point of measurement system discussions due, in large part, to the well-documented and broadly-accepted acceptance criteria (see Range Method section above).  To lower %GRR, return to EV and AV to identify potential improvements.  One of the components may be significantly larger than the other, making it the logical place to begin the search for improvements to the measurement system.
     Part variation (%PV) is something of a counterpoint to R&R.  When %GRR is in the 10 – 30% range, where the system may be acceptable, high %PV can be cited to support acceptance of the measurement system.  If equipment and appraisers contribute zero variability to measurements, %PV = 100%.  This, of course, does not occur in real-world applications; it is an asymptotic target.
     The final calculation on the GRR report, the number of distinct categories (ndc), is an assessment of the measurement system’s ability to distinguish parts throughout the range of variation.  Formally, it is “the number of non-overlapping 97% confidence intervals that will span the expected product variation.”  The higher the ndc, the greater the discrimination capability of the system.  The calculated value is truncated to an integer and should be 5 or greater to ensure a reliable measurement system.
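     For those who would rather script the report than complete it by hand, the sketch below follows widely published average-and-range constants for three appraisers, three trials, and ten parts (K1 = 0.5908, K2 = 0.5231, K3 = 0.3146, and the 1.41 multiplier for ndc).  The constants and data are assumptions for illustration; confirm them against the forms in Exhibits 4 and 9 before relying on the results.

# Average and range method GRR report (illustrative sketch; hypothetical data)
import numpy as np

# measurements[appraiser][part][trial]: 3 appraisers x 10 parts x 3 trials
rng = np.random.default_rng(2)
m = 10 + 0.2 * np.arange(10)[None, :, None] + 0.02 * rng.standard_normal((3, 10, 3))

# Constants assumed from commonly published tables (3 trials, 3 appraisers, 10 parts)
K1, K2, K3 = 0.5908, 0.5231, 0.3146
n_parts, n_trials = 10, 3

r_double_bar = (m.max(axis=2) - m.min(axis=2)).mean()             # average of per-part ranges
x_diff = m.mean(axis=(1, 2)).max() - m.mean(axis=(1, 2)).min()    # spread of appraiser averages
r_p = m.mean(axis=(0, 2)).max() - m.mean(axis=(0, 2)).min()       # spread of part averages

ev = r_double_bar * K1                                                        # repeatability
av = np.sqrt(max((x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials), 0.0))   # reproducibility
grr = np.sqrt(ev ** 2 + av ** 2)
pv = r_p * K3                                                                 # part variation
tv = np.sqrt(grr ** 2 + pv ** 2)                                              # total variation

for name, value in [("%EV", ev), ("%AV", av), ("%GRR", grr), ("%PV", pv)]:
    print(f"{name} = {100 * value / tv:.1f}%")
print(f"ndc = {int(1.41 * pv / grr)}")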
 
     To conclude this section, three important notes need to be added.  First, nomenclature suggests that a GRR study is complete with the calculation of %EV, %AV, and %GRR.  However, %PV and ndc are included in a “standard” GRR study to provide additional insight to a measurement system’s performance, facilitating its evaluation and acceptance or improvement.
      Second, some sources refer to %GRR as the Precision to Tolerance, or P/T, Ratio.  Different terminology, same calculation.
     The final note pertains to the evaluation of a system with respect to the specification tolerance instead of the process variation, as discussed in the Preparation for a GRR Study section.  If the specification tolerance (ST) is to be the basis for evaluation, the calculations of %EV, %AV, %GRR, and %PV on the GRR report (Exhibit 9, right-hand column) are to be made with TV replaced by ST/6.  Judgment of acceptability must also be adjusted to account for the type of analysis conducted.
 
Analysis of Variance Method
     The analysis of variance method (ANOVA) is more accurate and more complex than the previous methods discussed.  It adds the ability to assess interaction effects between appraisers and parts as a component of measurement variation.  A full exposition is beyond the scope of this presentation; we will instead focus on its practical application.
     The additional information available in ANOVA expands the possibilities for graphical analysis.  An interaction plot can be used to determine if appraiser-part interaction effects are significant.  Each appraiser’s measurement average for each part is plotted; data points for each appraiser are connected by a line, as shown in the example in Exhibit 10.  If the lines are parallel, no interaction effects are indicated.  If the lines are not parallel, the extent to which they are non-parallel indicates the significance of the interaction.
     To verify that gauge error is a normally-distributed random variable (an analysis assumption), a residuals plot can be used.  A residual is the difference between an appraiser’s average measurement of a part and an individual measurement of that part.  When plotted, as shown in the example in Exhibit 11, the residuals should be randomly distributed on both sides of zero.  If they are not, the cause of skewing should be investigated and corrected.
     Numerical analysis is more cumbersome in ANOVA than the other methods discussed.  Ideally, it is performed by computer to accelerate analysis and minimize errors.  Calculation formulas are summarized in the ANOVA table shown in Exhibit 12.  A brief description of each column in the table follows.
  • Source:  source, or cause, of variation.
  • DF:  degree of freedom attributed to the source, where k = the number of appraisers, n = the number of parts, and r = the number of trials.
  • SS:  sum of squares; the deviation from the mean of the source.  Calculations are shown in Exhibit 13.
  • MS:  mean square; MS = SS/DF.
  • F:  F-ratio; F = MSAP/MSe; higher values indicate greater significance.
  • EMS:  expected mean square; the linear (additive) combination of variance components.  ANOVA differentiates four components of variation – repeatability (EV), parts (PV), appraisers (AV), and interaction of appraisers and parts (INT).  The calculations used to estimate these variance components are shown in Exhibit 14.  Negative components of variance should be set to zero for purposes of the analysis.
     Finally, calculations analogous to those in the right-hand column of the GRR report used in the average and range method (Exhibit 9) can be performed.  These calculations, shown in Exhibit 15, define measurement system variation in terms of a 5.15σ spread (“width”), or a 99% range.  This range can be expanded to 99.73% (6σ spread) by substituting 5.15 with 6 in the calculations.  The ubiquity of “six sigma” programs may make this option easier to recall and more intuitive, facilitating use of the tool for many practitioners.
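     The heavy lifting of the ANOVA method is routinely delegated to statistical software, but the variance-component arithmetic itself is short.  The sketch below assumes the usual two-way random-effects estimates (repeatability from the error mean square; interaction, appraiser, and part components from differences of mean squares) and converts each to a 5.15σ width; the mean squares shown are hypothetical, so treat it as an illustration rather than a substitute for Exhibits 12 through 16.

# ANOVA GRR variance components and 5.15-sigma widths (illustrative sketch)
# Two-way random-effects estimates assumed; mean squares below are hypothetical.
k, n, r = 3, 10, 3          # appraisers, parts, trials
MS_A, MS_P, MS_AP, MS_e = 0.0090, 0.4500, 0.0024, 0.0018   # from the ANOVA table

var_repeatability = MS_e                                # EV component
var_interaction   = max((MS_AP - MS_e) / r, 0.0)        # INT component
var_appraiser     = max((MS_A - MS_AP) / (n * r), 0.0)  # AV component
var_part          = max((MS_P - MS_AP) / (k * r), 0.0)  # PV component

widths = {
    "EV":  5.15 * var_repeatability ** 0.5,
    "AV":  5.15 * var_appraiser ** 0.5,
    "INT": 5.15 * var_interaction ** 0.5,
    "PV":  5.15 * var_part ** 0.5,
}
widths["GRR"] = 5.15 * (var_repeatability + var_appraiser + var_interaction) ** 0.5
for name, width in widths.items():
    print(f"{name}: {width:.4f}")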
     The notes at the conclusion of the Average and Range Method section are also applicable to ANOVA.  The additional calculations are shown, with ANOVA nomenclature, in Exhibit 16.  The acceptance criteria also remain the same.  The advantage that ANOVA provides is the insight into interaction effects that can be explored to identify measurement system improvement opportunities.
     The three methods of variable gauge repeatability and reproducibility study discussed – range method, average and range method, and ANOVA – can be viewed as a progression.  As measured features become more critical to product performance and customer satisfaction, the measurement system requires greater attention; that is, more accurate and detailed analysis is required to ensure reliable performance.
     The progression, or hierarchy, of methods is also useful for those new to GRR studies, as it allows basic concepts to be learned, then built upon.  Only an introduction was feasible here, particularly with regards to ANOVA.  Consult the references listed below, and other sources on quality and statistics, for more detailed information.
 
     JayWink Solutions awaits the opportunity to assist you and your organization with your quality and operational challenges.  Feel free to contact us for a consultation.
 
     For a directory of “The War on Error” volumes on “The Third Degree,” see “Vol. I:  Welcome to the Army.”
 
References
[Link] “Statistical Engineering and Variation Reduction.”  Stefan H. Steiner and R. Jock MacKay;  Quality Engineering, 2014.
[Link] “An Overview of the Shainin System™ for Quality Improvement.”  Stefan H. Steiner, R. Jock MacKay, and John S. Ramberg;  Quality Engineering, 2008.
[Link] “Measurement Systems Analysis,” 3ed.  Automotive Industry Action Group, 2002.
[Link] “Introduction to the Gage R & R.”  Wikilean.
[Link] “Two-Way Random-Effects Analyses and Gauge R&R Studies.”  Stephen B. Vardeman and Enid S. VanValkenburg; Technometrics, August 1999
[Link] “Discussion of ‘Statistical Engineering and Variation Reduction.’”  David M. Steinberg;  Quality Engineering, 2014.
[Link] “Conducting a Gage R&R.”  Jorge G. Tavera Sainz; Six Sigma Forum Magazine, February 2013.
[Link] Basic Business Statistics for Managers.  Alan S. Donnahoe; John Wiley & Sons, Inc., 1988.
[Link] Creating Quality.  William J. Kolarik; McGraw-Hill, Inc., 1995.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[The War on Error – Vol. III:  A Tale of Two Journeys]]>Wed, 29 Jul 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/the-war-on-error-vol-iii-a-tale-of-two-journeys     There is a “universal sequence for quality improvement,” according to the illustrious Joseph M. Juran, that defines the actions to be taken by any team to effect change.  This includes teams pursuing error- and defect-reduction initiatives, variation reduction, or quality improvement by any other description.
     Two of the seven steps of the universal sequence are “journeys” that the team must take to complete its problem-solving mission.  The “diagnostic journey” and the “remedial journey” comprise the core of the problem-solving process and, thus, warrant particular attention.
The Universal Sequence
     The universal sequence defines a problem-solving framework of which the journeys are critical elements.  Before embarking on discussions of the journeys, readers should be familiar with the framework and the journeys’ place in it.  For this purpose, a very brief introduction to the seven steps of the universal sequence follows:
(1) Provide proof of the need.  Present data and information that demonstrates the need for improvement; typically, this is translated into financial terms (e.g. scrap cost, contribution margin, etc.) for management review.
(2) Identify project(s).  Define projects in terms of specific problems that create the need and link them to the breakthrough desired.
(3) Organize for improvement.  Assign responsibility for high-level oversight of the improvement program (sponsors that authorize funds) and management of each project; assign team members to execute project tasks.
(4) Take the diagnostic journey.  Identify the root cause of the problem to be solved.
(5) Take the remedial journey.  Implement corrective action(s) to eliminate the root cause.
(6) Overcome resistance to change.  Communicate with those affected by the change, encourage their participation in the transformation process, and allow them time to adjust.
(7) Sustain the improvement.  Update or create new standards and monitoring and controlling systems to prevent regression.
     With the framework of the universal sequence now defined, it is time for a more detailed look at the diagnostic and remedial journeys.
 
The Diagnostic Journey
     The diagnostic journey takes a problem-solving team “from symptom to cause.”  The first leg of this journey consists of understanding the symptoms of a problem by analyzing two forms of evidence.  First, the team must ensure that all parties involved use shared definitions of special and common terms used to describe a situation, defect, etc.  It is imperative that the words used to communicate within the team provide clarity; inconsistent use of terminology can cause confusion and misunderstanding.  Second, autopsies (from Greek autopsia – “seeing with one’s own eyes”) should be conducted to deepen understanding of the situation.  An autopsy can also confirm that the use of words is consistent with the agreed definition, or offer an opportunity to make adjustments in the early stages of the journey.
     Understanding the symptoms allows the team to begin formulating theories about the root cause.  This can be done with a simple brainstorming session or other idea-generation and collection technique.  A broad array of perspectives is helpful (e.g. quality, operations, management, maintenance, etc.).
     Processing a large number of theories can be simplified by organizing them in a fishbone diagram, mind map, or other format.  Grouping similar or related ideas, and creating a visual representation of their interrelationships, facilitates the selection of theories to be tested and experimental design.  Visualized grouping of suspect root causes aids the design of experiments that can test multiple theories simultaneously, contributing to an efficient diagnosis.
     Once the investigation has successfully identified the root cause, the diagnostic journey is complete.  The team must transition and embark on the remedial journey.
 
The Remedial Journey
     The remedial journey takes the team “from cause to remedy;” it begins with the choice of alternatives.  A list of alternative solutions, or remedies, should be compiled for evaluation.  Remedies should be proposed considering the hierarchy (preference) of error-reduction mechanisms discussed in “Vol. II:  Poka Yoke (What Is and Is Not).”  All proposed remedies should satisfy the following three criteria:
  • it eliminates or neutralizes the root cause;
  • it optimizes cost;
  • it is acceptable to decision-makers.
The first and third criteria are straightforward enough – if it does not remove the root cause, it is not a remedy; if it is not acceptable to those in charge, it will not be approved for implementation.  Cost optimization, however, is a bit more complex.  Life-cycle costs, opportunity costs, transferred costs, and the product’s value proposition or market position all need to be considered.  See “The High Cost of Saving Money” and “4 Characteristics of an Optimal Solution” for more on these topics.
     To verify that a proposed remedy, in fact, eliminates the root cause, “proof-of-concept” testing should be conducted.  This is done on a small scale, either in a laboratory setting or in the production environment, minimizing disruption to the extent possible.  If successful on a small scale, implementation of the remedy can be ramped up in production.  Successful full-scale implementation should be accompanied by updates to instructions, expanded training, and other activities required to normalize the new process or conditions.
     Normalization activities taking place during the remedial journey may overlap with the last two steps of the universal sequence for quality improvement; this overlap may help to ensure that the sequence is completed.  The organization can fully capitalize on the effort only when the entire sequence has been completed.  Repeating the sequence is evidence of a continuous improvement culture, whether nascent or mature.
 
     The diagnostic and remedial journeys are defined as they are to emphasize three critical, related characteristics.  First, the diagnostic and remedial journeys are separate, independent endeavors.  Each requires its own process to complete successfully.  Second, both are required components of an effective quality-improvement or problem-solving project.  Finally, the remedial journey should not commence until the diagnostic journey is complete – diagnosis precedes remedy.
     These points may seem obvious or unremarkable; however, the tendency of “experts” to jump directly from symptom to remedy is too common to ignore.  Doing so fails to incorporate all available evidence, allowing bias and preconceived notions to drive decisions.  The danger of irrational decision-making – acting on willfully incomplete information – is a theme that runs throughout the “Making Decisions” series on “The Third Degree.”
     The diagnostic and remedial journeys are also described as the core of a successful problem-solving process, but not its entirety.  Each step in the universal sequence, when performed conscientiously, improves team effectiveness, increasing efficiency and probability of project success.
 
     General questions are welcome in the comments section below.  To address specific needs of your organization, please contact JayWink Solutions for a consultation.
 
     For a directory of “The War on Error” volumes on “The Third Degree,” see “Vol. I:  Welcome to the Army.”
 
References
[Link] Juran’s Quality Handbook.  Joseph M. Juran et al; McGraw-Hill.
[Link] “Quality Tree:  A Systematic Problem-Solving Model Using Total Quality Management Tools and Techniques.”  J. N. Pan and William Kolarik; Quality Engineering, 1992.
[Link] “Shainin Methodology:  An Alternative or an Effective Complement to Six Sigma?”  Jan Kosina; Quality Innovation Prosperity, 2015.
[Link] “Statistical Engineering and Variation Reduction.”  Stefan H. Steiner and R. Jock MacKay;  Quality Engineering, 2014.
[Link] “An Overview of the Shainin System™ for Quality Improvement.”  Stefan H. Steiner, R. Jock MacKay, and John S. Ramberg;  Quality Engineering, 2008.
[Link] “7 Stages of Universal Break-Through Sequence for an Organisation.”  Smriti Chand

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[The War on Error – Vol. II:  Poka Yoke (What Is and Is Not)]]>Wed, 15 Jul 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/the-war-on-error-vol-ii-poka-yoke-what-is-and-is-not     Of the “eight wastes of lean,” the impacts of defects may be the easiest to understand.  Most find the need to rework or replace a defective part or repeat a faulty service, and the subsequent costs, to be intuitive.  The consequences of excess inventory, motion, or transportation, however, may require a deeper understanding of operations management to fully appreciate.
     Conceptually, poka yoke (poh-kah yoh-keh) is one of the simplest lean tools; at least it was at its inception.  Over time, use of the term has morphed and expanded, increasing misuse and confusion.  The desire to appear enlightened and lean has led many to misappropriate the term, applying it to any mechanism used, or attempt made, to reduce defects.  Poka yoke is often conflated with other process control mechanisms, including engineering controls and management controls.
     To effectively reduce the occurrence of errors and resultant defects, it is imperative that process managers differentiate between poka yoke devices, engineering controls, and management controls.  Understanding the capabilities and limitations of each allows appropriate actions to be taken to optimize the performance of any process.
Poka Yoke
     Poka yoke is a Japanese term; the commonly accepted translation is “avoid mistakes” (yokeru ≈ avoid; poka ≈ mistake), or “mistake-proofing” in a production context.  Poka yoke devices come in two forms:
(1) a device, such as a fixture or tool, that interfaces with a part in such a way that prevents an error from occurring.  Example:  a nearly-symmetrical part is drilled in an off-center location; incorrect orientation of the part when drilled will render it scrap.  To prevent this easily-committed error, a fixture is used to engage the asymmetric features, properly positioning the part for the drilling operation.  Operators are no longer required to closely inspect the part, increasing productivity of the operation, while eliminating scrap due to incorrect orientation of the part.  This is called the contact method; see Exhibit 1 for a visual representation of this example.

Exhibit 1:  Contact Method Poka Yoke Device

(2) a device that makes an error or defect obvious.  Example:  each product to be assembled is built from a kit containing every component required; nothing extraneous is allowed in the kit.  Before the product leaves the assembly station, a glance at the kitting tray confirms that it is empty and, therefore, no components are missing from the assembly.  If a component remains in the kit, the error can be corrected immediately, before it affects downstream processes or the customer experience.  This is a fixed-value type poka yoke; see Exhibit 2 for an example.
     Poka yoke devices can protect processes from various types of errors (see Vol. I for a brief introduction to error types).  Many inadvertent errors caused by the pressures of production requirements (rate), distraction, fatigue, or other factors can be prevented by using poka yokes.  An operator’s lack of experience or forgetfulness will have less impact on outcomes when poka yoke devices are in use.  Identification errors can be eliminated by using a device that amplifies the differences between similar components (see Exhibit 1 example).  In highly-contentious environments, the ability of poka yoke devices to make willful and “intentional” errors (see comments on sabotage in Vol. I) more difficult to commit adds greatly to their value.
     One hundred percent inspection by operators is prohibitively expensive for most operations, particularly high-volume production.  Poka yoke devices, on the other hand, are typically inexpensive (and a non-recurring expense), making cost-effective 100% inspection attainable.
     Poka yokes are often completely passive, drawing little attention from operators.  Any degradation in performance that may occur is easily overlooked.  For this reason, all poka yoke devices must be included in equipment maintenance plans and verified periodically.  Poka yoke verification with “limit samples” or other test pieces is typically done at every startup and shift change.  If a device accepts an incorrect part or rejects a correct part, proper function must be restored before the process is allowed to run.  For example, a fixture that has worn to the point that it accepts oversize parts, or no longer holds parts securely, should be replaced before any more parts are processed.
     The purest form of poka yoke is a simple, mechanical fixture – no moving parts, no electronics – in which correct parts can be placed and incorrect parts cannot.  As features are added, simplicity is sacrificed and, eventually, the term “poka yoke” becomes “fuzzy” and confusing, as discussed in the introduction.  To minimize confusion caused by fuzziness, we will now turn to descriptions of the other mechanisms of error reduction mentioned:  engineering controls and management controls.
 
Engineering Controls
     As the performance and reliability of engineering controls improved, it became easier to conflate them with poka yoke devices.  High reliability and consistent performance allows these systems to fade from the consciousness of operators and managers alike.  It may be easy to do, but this class of controls should not be forgotten or ignored.  Unlike poka yoke, most engineering controls are active devices; they are components of equipment and systems that require regular attention to remain safe and productive.
     Many engineering controls in common use are operated via PLCs (programmable logic controllers), though this is not exclusively the case.  PLC-operated controls may include proximity sensors, limit (travel) switches, pressure switches, encoders, force transducers, or other instruments.  PLC controls can also be safety devices, such as light curtains or two-hand anti-tie-down activation switches.  The program run by the PLC is also an engineering control.
      Whenever possible, engineering controls must be designed to “fail safe.”  That is, failure of a control to function properly must prevent further operation of equipment or processing of parts.  Doing so eliminates consumer risk (Type II) by ensuring that parts cannot be produced without completing all quality verifications.  Producer risk (Type I) is also kept low; production interruptions receive prompt attention to identify and correct malfunctions.
     Where fail-safe operation is not possible, frequent verification of proper function is required.  Clearly, fail-safe operation is preferred; achieving it requires equipment designed to support the initiative, selection of appropriate instruments (sensors, switches, etc.), and proper programming.
     Non-PLC engineering controls include masking of parts prior to painting or plating, chemical “recipes,” process control limits, special work instructions, technical standards, and special handling equipment used when manual handling could affect product quality (e.g. contamination).  Fail-safe operation of these controls may not always be feasible, but may be possible for some operations if used in conjunction with sensors or other PLC-operated controls.
     Engineering controls generally provide less certainty than poka yoke devices.  For example, a sensor used to verify the presence and proper position of a part before initiating a machine cycle could become dirty, causing it to always allow the machine to cycle.  Defective product, equipment damage, and injury to personnel could result.  This is why sensor selection, proper programming, and functional verifications are so important.  It also reinforces the need to understand the difference between a poka yoke and an engineering control.
 
Management Controls
     Management controls consist, in large part, of policies, procedures, and instructions.  Work instructions – engineering controls – are focused on the operation of a specific machine or process, while instructions, as management controls, are broadly applicable.  For example, instructions may define a standard format of time and production reporting to be used by all employees for all equipment and processes.
     A common example of a procedure used for management control is that to be followed when defective product is found.  It could include steps needed to contain and replace defective goods and to identify and correct the source of the defect.  A number of procedures are also used to meet environmental, safety, and other regulations.
     Policies are typically published by Human Resources to define the company’s expectations in legal and ethical matters and to assure employees of fair treatment.  Acceptable behavior in financial transactions, interpersonal relationships, and interactions with customers, suppliers, and public officials are examples of topics typically addressed in a company’s policies.
     Other management controls include production quotas, information systems access, assignment of authority (to halt production, initiate disciplinary action, etc.), and training (development plans and instruction).
 
In the Service Sector
     The previous discussion and examples focused on discrete manufacturing operations.  Many of the same concerns expressed also exist in process and service industries.  The misappropriation of the term poka yoke seems especially egregious in the service sector.
     The presence of the customer in service operations increases the potential for errors – either customer or provider can make a mistake that affects service delivery.  The variability that customers introduce also makes the development of poka yoke devices difficult, if not impossible.  Poka yokes are designed around a small number of known variations; services are often much more varied and unpredictable.
     The wide range of services available also limits the application of technology-based engineering controls.  The more personalized the service, the less likely that sophisticated technology can be used to prevent errors.  Applications are often limited to “back office” operations; that is, routine steps performed in the customer’s absence.  Nontechnology-based engineering controls, such as process control limits and work instructions, are likely to be much more prevalent in service operations.
     Given the challenges discussed above, it will likely come as no surprise that providers rely heavily on management controls to maintain service quality.  Broadly-applicable procedures for customer engagement are necessary to provide a consistent experience for customers with very different needs.  It is highly-developed and well-executed management controls that are primarily responsible for the ability to deliver consistent high-quality personal service.
     A benchmark for consistency, fast food restaurants depend on management controls for the customer experience (order entry, wrapping and presentation, etc.), but also employ significant engineering controls behind the scenes.  From temperature controllers and timers to the layout of the counter and drink station, engineering controls contribute greatly to providing a consistent customer experience in every location.
     Poka yoke devices exist in the fast food environment (e.g. egg ring), but are much less common than other types of error-reduction mechanisms.  Impressive levels of consistency, combined with misunderstanding of the term, lead some to label service providers’ error-reduction efforts poka yokes.  Such labels are a testament to the mechanisms’ effectiveness, but they are mostly incorrect.
 
Hierarchy of Controls
     All of the error-reduction mechanisms discussed support a “no fault forward” philosophy.  They are complementary approaches; there is, however, an implied hierarchy.  Prevention of errors is strongly preferred to other error-reduction schemes, followed by “capture at the source.”  If neither of these can be achieved, the sooner an error or defect is detected, the better.  The worst case is for a customer to receive a faulty product or unsatisfactory service.  This clearly places poka yoke devices atop the hierarchy.  Engineering controls can be used to prevent or detect errors and defects, positioning them next in the hierarchy.  Management controls are at the bottom of the hierarchy; this is where responses to failures of higher-level controls are typically defined.  They rely exclusively on human intervention and, thus, are less reliable.
     An alternate formulation of this hierarchy considers the target of each control mechanism.  Poka yoke devices target part control, ensuring that every part processed is acceptable.  Engineering controls are used for process control, ensuring that each iteration of a process is performed consistently across time, parts, and operators.  Management controls target consistent behavior of people across time and circumstances.
     The hierarchy also represents relative cost-effectiveness typical of the control mechanism categories.  Poka yoke devices are typically low-cost and highly effective.  Management controls are of moderate cost, but their effectiveness is dependent upon an organization’s culture and the quality of the controls developed.  Engineering controls occupy the middle ground; they can be extremely effective, but can also become quite costly.  Technology solutions, in particular, must be well-designed for their purpose to maintain both affordability and effectiveness.
 
     Being “tough on the process, easy on the people” signals to employees that they are valued in ways that platitudes (“our people are our most valuable resources”) simply cannot communicate.  An organization’s commitment to developing and maintaining appropriate error-reduction mechanisms communicates standards, values, and expectations more clearly than any speech or memo ever will.
 
     For additional assistance with error reduction, feel free to ask a question in the comments section below or contact JayWink Solutions for guidance customized to your specific situation.
 
     For a directory of “The War on Error” volumes on “The Third Degree,” see “Vol. I:  Welcome to the Army.”
 
References
[Link] “Quality and Productivity Improvement:  A Study of Variation and Defects in Manufacturing.”  Edgardo J. Escalante; Quality Engineering, March 1999.
[Link] “What is Mistake Proofing?”  American Society for Quality.
[Link] “Poka-yoke.”  Wikipedia.
[Link] “What is the Poka Yoke Technique?”  Kanbanize.
[Link] “Factsheet:  ‘Poka Yoke’ error proofing.”  Chris Turley, LeanQCD.com.
[Link] “Poka-Yoke.”  iSixSigma.
[Link] “Error Proof the Pokayoke to Build in Quality.”  Jon Miller, Gemba Academy; July 26, 2006.
[Link] “Poka Yoke in the service sector.”  Chris Turley, LeanQCD.com.
[Link] “Poka-Yoke.”  Reference for Business.
[Link] “A Rose by Any Other Name…”  Glenn Nausley, Motorized Vehicle Manufacturing; June 20, 2018.
[Link] “Poka-Yoke is Not a Joke.”  Michael Schrage, Harvard Business Review; February 4, 2010.
[Link] The New Lean pocket Guide XL.  Don Tapping; MCS Media, Inc., 2006.
[Link] The Lean 3P Advantage.  Allan R. Coletta; CRC Press, 2012.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[The War on Error – Vol. I:  Welcome to the Army]]>Wed, 01 Jul 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/the-war-on-error-vol-i-welcome-to-the-army     Every organization wants error to be kept at a minimum.  The dedication to fulfilling this desire, however, often varies according to the severity of consequences that are likely to result.  Manufacturers miss delivery dates or ship faulty product; service providers fail to satisfy customers or damage their property; militaries lose battles or cause civilian casualties; all increase the cost of operations.
     You probably have some sensitivity to the effects errors have on your organization and its partners.  This series explores strategies, tools, and related concepts to help you effectively combat error and its effects.  This is your induction; welcome to The War on Error.
Errors vs. Effects
     To properly guide error-reduction efforts in your organization, it is important to make a clear distinction between errors and the effects of those errors.  An error is an unintended deviation from the process plan.  This definition highlights another important distinction – that between errors and sabotage.  Sabotage is an intentional act; it requires more than a typical error-reduction effort to effectively combat it.  As such, it is beyond the scope of this series, which focuses on the fallibility of conscientious operators and naturally-occurring phenomena.
     The effects of errors are known by many names; common general terms include defect, nonconformance, and noncompliance.  More-descriptive terms, related to a process, specification, or regulatory requirement, are used to specify the nature of the defect (e.g. electrical short), nonconformance (e.g. orange peel), or noncompliance (e.g. acidic effluent).
     The relationship between errors and their effects further clarifies the distinction.  Errors provide the opportunity – and a high probability – for defects (or nonconformances, noncompliances) to manifest.  More simply, errors cause defects.
     Effective error reduction requires curing the disease (errors and their causes), rather than merely treating its symptoms (repairing defects).  Removing unsatisfactory product, based on 100% inspection, for example, may reduce the number of defects perceived by customers, but it does not reduce the number of errors or the associated costs.  Discovery of effective countermeasures is facilitated in Failure Modes and Effects Analysis (FMEA) by explicitly separating discussion of failure modes (i.e. defects), effects of failures (i.e. symptoms, how defects are discovered), and causes of failures (errors, disease).
 
Types of Error
     There are many types of error that can occur, negatively impacting the output of a process.  Those described below are only a sample of common, generalized examples.  There will be others inherent in every process and requirement encountered.
     A lack of experience, training, or documented standards can cause any number of errors to occur.  A common problem is the acceptance of substandard output because the standard has not been clearly communicated or is highly subjective.
     Operators’ physical and cognitive performance declines with fatigue.  Monotonous or uninteresting work accelerates the onset of fatigue.  Accuracy or precision of motion may suffer, as well as the ability to keep pace with other process activities.  This could cause required operations to be skipped or performed incompletely.  It could even become hazardous, resulting in injury to workers, customers, or bystanders.
     Equipment failures can also cause a variety of errors.  A malfunctioning gauge, for example, may mislead an operator to believe that all process parameters are within acceptable limits.  The real situation, however, could be causing defective output with no indication that adjustment is needed.
     Identification errors often occur when two items or situations are difficult to distinguish.  Making assumptions or quickly reaching a conclusion without careful consideration may result in selecting incorrect material, for example, or pursuing an inappropriate course of action.
     There are many more potential errors; some are universal and some are unique to specific processes and circumstances.  Some authors include “intentional errors” in this discussion, but this term is a contradiction.  As mentioned previously, sabotage is better suited to a separate discussion.
 
Modes of Error Reduction
     There are three modes of error reduction in which activities take place:  prevention, detection, and correction.
     In prevention mode, as the name implies, activities are focused on preventing errors from occurring.  It is in this mode that standards and instructions are developed, training is conducted, equipment is maintained, and other efforts are made to reduce the probability of an error being made.
     Successful error prevention requires system-level analysis to identify likely errors; preventive measures can then be designed and implemented.  These measures are recorded in the FMEA under Current Process Controls – Prevention.  Prevention is typically the most cost-effective strategy; it limits scrap and rework costs and often reduces monitoring cost.
     During and following a process, detection mode activities attempt to discover any errors that occurred despite efforts to prevent them.  In manufacturing processes, defective product is removed, preventing it from reaching the customer.  The nature of service operations – specifically, customer involvement – may make this impossible.
     Successful error detection requires objective standards, training, and reliable instruments suited to the task.  The mechanisms used are recorded in the FMEA under Current Process Controls – Detection.  Detection controls may reveal the effects of a failure or detect the failure mode directly.
     Correction mode encompasses the worst-case scenario.  Prevention efforts have failed; if a customer discovers an error, detection efforts have also failed.  In service operations, the customer and provider may discover an error simultaneously; this situation provides no opportunity to correct the error before the customer is affected by it.  Correction efforts, therefore, may extend beyond the actual error to repairing the customer relationship and rebuilding confidence in the provider’s abilities.
     Manufacturers may recover more easily from a customer’s receipt of faulty product; the transaction is somewhat impersonal when compared to services.  However, repeated transgressions will damage a company’s reputation and product replacement will no longer be sufficient to recover from errors.
     If detection controls are effective, each correction required provides information that can be used to improve the upstream process by strengthening prevention measures.  Capitalizing on each opportunity is imperative; scrap, rework, and warranty costs can otherwise destroy a company’s profitability.
 
Automation
     Many people seem to think that automation is the answer to every problem.  After all, once the automation is programmed, it never forgets or gets tired; it is always precise and will never destroy itself to spite the management.  However, not all processes are amenable to automation.  Some that can be automated do not provide a cost-effective solution.  Furthermore, automation is susceptible to errors of its own if it is not fully compatible with the task.  Automation cannot apply judgment or understand nuance; these are human capabilities.  Any situation that has not been predicted and programmed cannot be handled properly.  The variety and uniqueness of service operations further confound attempts to automate, particularly those that value compassion and empathy.
     To be clear, automation certainly has a role to play and where appropriate, it will enter the discussion.  It is by no means, however, the default solution; therefore, it will not be a dominant topic in this series.
 
     Future volumes of “The War on Error” will explore tools and techniques that can be used to solve problems and reduce errors.  The series can be taken a la carte; readers are encouraged to “mix and match,” customizing a tool set that best fits the needs and capabilities of each individual and organization.
 
     Readers are also encouraged to submit questions or topic suggestions in the comments section below.  If more personalized guidance is required, please use the Contact page to provide background information.  JayWink can also provide materials, conduct training, lead or supplement your organization’s error-reduction efforts; schedule a consultation for details.  We look forward to assisting you and your organization.
 
References
[Link] “Quality and Productivity Improvement:  A Study of Variation and Defects in Manufacturing.”  Edgardo J. Escalante; Quality Engineering, March 1999.
[Link] The New Lean pocket Guide XL.  Don Tapping; MCS Media, Inc., 2006.

Jody W. Phelps, MSc, PMP®, MBA

Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
 
Directory of “The War on Error” entries on “The Third Degree.”
Vol. I:  Welcome to the Army (1Jul2020)
Vol. II:  Poka Yoke (What Is and Is Not) (15Jul2020)
Vol. III:  A Tale of Two Journeys (29Jul2020)
Vol. IV: Get Some R&R - Variables (12Aug2020)
Vol. V: Get Some R&R - Attributes (26Aug2020)
Vol. VI:  Six Sigma (9Sep2020)
Vol. VII:  The Shainin System (23Sep2020)
Vol. VIII:  Precontrol (7Oct2020)
]]>
<![CDATA[Making Decisions – Vol. VI:  Get Out the Vote]]>Wed, 17 Jun 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/making-decisions-vol-vi-get-out-the-vote     Previous volumes of “Making Decisions” have alluded to voting processes, but were necessarily lacking in detail on this component of group decision-making.  This volume remedies that deficiency, discussing some common voting systems in use for group decision-making.  Some applications and issues that plague these systems are also considered.
     Although “voting” is more often associated with political elections than decision-making, the two are perfectly compatible.  An election, after all, is simply a group (constituency) voting to decide (elect) which alternative (candidate) to implement (inaugurate).  Many descriptions of voting systems are given in the context of political elections; substituting key words, as shown above, often provides sufficient understanding to employ them for organizational decision-making.
General Information
     Although it has not been stressed in this series, it is important to remember that decision-making is not always focused on selecting a single alternative.  It can also be used to select multiple alternatives or to rank all acceptable or desirable alternatives.  As such, voting systems can be employed to identify a single “winner” or multiple “winners” or to prioritize several alternatives.
     Ties are possible in many voting systems; tie-breaking provisions should be specified before voting begins.  Ties can be broken in a number of ways, including repeating a vote or using a combination of methods.  Tie-breakers will not be discussed in detail; basic knowledge of voting systems will guide a group’s choice of method.
     There are many more systems – and variations on them – available than will be discussed here.  An online search for “electoral systems” provides ample resources for further inquiry.  Discussions of voting system types, jurisdictions or organizations that employ certain systems, potential misuse (e.g. manipulation), and other related topics can also be found.
     While most of these topics will be left to the reader to explore independently, we will discuss a topic that most people consider central to a discussion of voting:  “fairness.”
 
Voting System Fairness
     There are many fairness criteria that can be considered for any voting system.  Here, we will limit the discussion to four commonly-cited criteria.  These four fairness criteria are summarized below.
(1) Condorcet criterion:  A candidate that is preferred to all others in pairwise, or “head-to-head,” comparisons is a Condorcet winner.  A voting system that always selects the Condorcet winner satisfies the Condorcet criterion.
(2) Monotonicity:  Increasing preference for an alternative should not reduce its prospects for selection.  Likewise, reducing preference for an alternative should not improve its prospects for selection.  A voting system is monotonic if it is impossible to prevent selection of an alternative by raising its ranking while the relative rankings of all other alternatives remain unchanged.  Attempts to alter the outcome by manipulating rankings of a single alternative will not succeed in a monotonic voting system.
(3) Majority criterion:  If an alternative garners the highest preference of a majority of voters, that alternative should be selected.  A voting system that allows selection of an alternative with only minority support, when majority support exists, violates the majority criterion.
(4) Independence of Irrelevant Alternatives (IIA):  The introduction or removal of a non-winning candidate should not change the rankings of other candidates or the outcome of an election.  If a final selection is changed due to the removal of a non-winning alternative, the voting system has violated the IIA criterion.  (For discussion of a similar issue, see rank reversal in “Vol. III:  Analytic Hierarchy Process.”)
     Arrow’s Impossibility Theorem states that no ranked voting system can satisfy all fairness criteria in all circumstances.  The preceding limited presentation is intended only to introduce the reader to the relevance of fairness criteria to the selection of a voting system to be used in an organization.  These and others should be considered in the context of a specific decision-making group before defining a voting system to be used.
 
Voting System Examples
     A number of voting systems are summarized below.  The process followed to reach a decision is described.  For comparison, several of the examples will use the same sample voting data.  A simplified “theoretical” data set is provided in Exhibit 1 for this purpose.  The example presented includes four alternatives (W, X, Y, Z); voters are divided among four voting blocs with unique preference profiles (A, B, C, D).  The percentage (or number) of voters expressing each set of preferences is also shown.
     Before we begin describing voting systems, it is important to note that many are known by multiple names.  Also, sources vary on the definition of some terms and specific details of methods.  These differences need not be alarming, but they highlight the need to fully define the voting system in use.  Relying on a name alone may cause confusion and disagreement within a group.
 
Plurality
     Plurality may be the simplest and most common voting system in use.  Choosing from a list of three or more alternatives, each group member casts a single vote for his/her “favorite.”  The alternative with the highest number of votes “wins.”  Plurality votes are susceptible to the “spoiler effect;” a spoiler is a candidate that splits the vote with another, resulting in the selection of an alternative that could not otherwise win.
     In the example presented in Exhibit 1, only the 1st-choice votes are considered (ranked votes are not submitted); alternative W wins with 42% of the vote.  If alternative Y were not included, voting bloc C would have voted for Z (bloc C’s 2nd choice), giving Z 43% of the vote and a narrow victory.  Thus, supporters of alternative Z may consider Y a spoiler.
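     For readers who like to experiment, the spoiler arithmetic is easy to reproduce in a few lines of Python.  The sketch below is only an illustration; the bloc preference profiles are reconstructed from the worked examples in this article, since Exhibit 1 itself is presented as an image.
```python
from collections import Counter

# Bloc sizes and ranked preferences (1st through 4th choice), reconstructed
# from the worked examples in this article; Exhibit 1 itself is an image.
blocs = {
    "A": (42, ["W", "X", "Y", "Z"]),
    "B": (24, ["Z", "X", "W", "Y"]),
    "C": (19, ["Y", "Z", "W", "X"]),
    "D": (15, ["X", "Y", "Z", "W"]),
}

def plurality(blocs, excluded=frozenset()):
    """Each bloc's votes go to its highest-ranked alternative still in the race."""
    tally = Counter()
    for size, prefs in blocs.values():
        top = next(p for p in prefs if p not in excluded)
        tally[top] += size
    return tally

print(plurality(blocs))                   # W leads with 42
print(plurality(blocs, excluded={"Y"}))   # without "spoiler" Y, Z wins 43 to 42
```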
     Plurality was introduced in “Vol. IV:  Fundamentals of Group Decision-Making” as a decision rule.
 
Instant Run-Off Voting (IRV)
     For an instant run-off election, voters submit ranked-preference ballots, such as those compiled in Exhibit 1.  If no candidate receives a majority of 1st-choice votes, the candidate with the fewest 1st-choice votes is eliminated.  Votes cast for the eliminated candidate are redistributed to the remaining candidates according to those voters’ preference profiles.  This process is repeated until a majority winner emerges.  The result is “instant” because no additional votes need be cast; the original ballot contains all information necessary to make a final determination.  The additional “rounds” of voting eliminate the spoiler effect inherent to plurality voting.
     In our example, votes cast for alternative X are transferred to Y; Exhibit 2 shows the results of the first run-off.  With X out of consideration, voting blocs C and D share a preference profile that gives Y 34% of the vote.
     In the second run-off, alternative Z is eliminated, resulting in a two-way race between W and Y.  Alternative W captures the votes originally cast for Z and the victory, with 66% of the vote.  Results of the second and final run-off are shown in Exhibit 3.
     Interestingly, limiting the process to a single run-off would have yielded a very different result.  Alternatives Y and X would have been eliminated, transferring the votes from voting blocs C and D to alternative Z.  This would have given Z a 58% to 42% victory over W.  Defining the process beforehand is critical to ensuring support for a decision.
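     The run-off logic is just as easy to script.  A minimal sketch, assuming the same reconstructed bloc profiles, repeats the tally-and-eliminate loop until one alternative holds a majority.
```python
from collections import Counter

# Bloc sizes and ranked preferences, reconstructed from this article's examples.
blocs = {
    "A": (42, ["W", "X", "Y", "Z"]),
    "B": (24, ["Z", "X", "W", "Y"]),
    "C": (19, ["Y", "Z", "W", "X"]),
    "D": (15, ["X", "Y", "Z", "W"]),
}

def instant_runoff(blocs):
    """Tally 1st choices among remaining alternatives, eliminating the lowest
    until one alternative holds a majority."""
    eliminated = set()
    while True:
        tally = Counter()
        for size, prefs in blocs.values():
            top = next(p for p in prefs if p not in eliminated)
            tally[top] += size
        winner, votes = tally.most_common(1)[0]
        if votes > sum(tally.values()) / 2:
            return winner, dict(tally)
        eliminated.add(min(tally, key=tally.get))   # fewest 1st-choice votes

print(instant_runoff(blocs))   # X, then Z, are eliminated; W wins 66 to 34
```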
 
Multi-Voting
     A multi-voting process can be used to narrow the field of alternatives when a large number are under consideration.  The number of votes and number of alternatives to be eliminated in each round can be adjusted to suit the group’s needs, but the “1/3 rule” is a common starting point.  To apply the 1/3 rule, each group member votes for a number of alternatives equal to (or approximately) one third of the number under consideration.  When the votes are tallied, a number of alternatives equal to (or approximately) one third of the number under consideration with the lowest vote counts are eliminated.  Only one vote per alternative per person is allowed; there is no weighting of votes.
     The process is repeated, adjusting the number of votes and eliminations as needed, until a manageable set of alternatives is attained.  A final voting process is then initiated to determine the “overall winner.”  The final vote can be conducted with the group’s choice of method.
     The theoretical ballot data shown in Exhibit 1 could have resulted from a multi-voting process.  Let’s say there were originally 12 alternatives that had not been disqualified.  Two rounds of voting, with four votes cast by each group member and four alternatives eliminated each round, leave four viable alternatives in the running.  We have already subjected this reduced set to plurality and instant run-off votes; demonstrations of other options are forthcoming.
     Rather than picking a single winner, this process could be used to identify the four best projects, for example, all of which will be executed.  The final vote count could also be used to prioritize four initiatives, defining the sequence in which they are to be pursued.
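     A single round of multi-voting can be sketched as shown below.  The tallies are purely illustrative (they are not drawn from Exhibit 1), and the “1/3 rule” is applied as a rounded-up fraction; a group may adjust the number of votes and eliminations as described above.
```python
import math
from collections import Counter

def multivoting_round(tallies, n_eliminate):
    """Keep all but the n_eliminate alternatives with the fewest votes."""
    ranked = Counter(tallies).most_common()
    return [alternative for alternative, _ in ranked[:len(ranked) - n_eliminate]]

# Hypothetical first-round tallies for 12 alternatives (illustrative numbers only).
round_1 = {"A1": 9, "A2": 3, "A3": 7, "A4": 1, "A5": 8, "A6": 2,
           "A7": 6, "A8": 5, "A9": 4, "A10": 10, "A11": 11, "A12": 0}
cut = math.ceil(len(round_1) / 3)        # "1/3 rule": eliminate roughly one third
print(multivoting_round(round_1, cut))   # eight alternatives survive to the next round
```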
 
Block Voting
     When multiple selections are to be made from a list of alternatives, block voting is one simple option.  Each voter casts a number of votes equal to the number of alternatives to be accepted.  The alternatives with the highest vote counts are accepted until all openings are filled.
     If two openings are to be filled using the ballot data in Exhibit 1, simply count the 1st- and 2nd-choice votes.  Alternative X is the 1st or 2nd choice of 81% of voters, while Z is the 1st or 2nd choice of 43% of voters.  In this example, no tie-breaker is necessary, but it is possible for alternatives to receive equal numbers of votes.
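     A brief sketch of the block vote count, again assuming the reconstructed bloc profiles, is shown below; each bloc’s votes go to its top choices, one vote per alternative per voter.
```python
from collections import Counter

# Bloc sizes and ranked preferences, reconstructed from this article's examples.
blocs = {
    "A": (42, ["W", "X", "Y", "Z"]),
    "B": (24, ["Z", "X", "W", "Y"]),
    "C": (19, ["Y", "Z", "W", "X"]),
    "D": (15, ["X", "Y", "Z", "W"]),
}

def block_vote(blocs, openings):
    """Each voter casts one vote for each of his/her top `openings` choices."""
    tally = Counter()
    for size, prefs in blocs.values():
        for choice in prefs[:openings]:
            tally[choice] += size
    return tally.most_common(openings)

print(block_vote(blocs, openings=2))   # X (81) and Z (43) fill the two openings
```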
 
Approval Voting
     Approval voting is a simple system that can be used to select a single winner or multiple winners.  Approval votes can also be used to rank alternatives.  Each voter casts one vote for each alternative that s/he “approves” or deems acceptable.  The alternative(s) with the most “approvals” is/are selected.  Ties could occur; groups using approval voting should be prepared with a defined tie-breaking process, whether selecting “winners” or ranking alternatives.
     Approval voting does not reveal voters’ preferences among alternatives deemed acceptable.  Preferences may need to be accounted for as a tie-breaking factor.
     Returning to the sample ballot data of Exhibit 1, let’s assume that each voter’s first three choices are acceptable to that voter, while the fourth is not.  Let’s also assume that we are accepting two alternatives.  Summing the approval votes for each alternative, W and X are accepted with the approval of 85% and 81% of voters, respectively.
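     Counting approval votes differs from the block vote sketch only in how many choices each voter marks.  A minimal illustration, assuming the reconstructed profiles and the three-acceptable-choices assumption above:
```python
from collections import Counter

# Bloc sizes and ranked preferences, reconstructed from this article's examples.
blocs = {
    "A": (42, ["W", "X", "Y", "Z"]),
    "B": (24, ["Z", "X", "W", "Y"]),
    "C": (19, ["Y", "Z", "W", "X"]),
    "D": (15, ["X", "Y", "Z", "W"]),
}

def approval_vote(blocs, approved=3):
    """Assume each voter approves his/her top three choices, per the example."""
    tally = Counter()
    for size, prefs in blocs.values():
        for choice in prefs[:approved]:
            tally[choice] += size
    return tally

print(approval_vote(blocs).most_common(2))   # W (85) and X (81) are accepted
```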
 
Cumulative Voting
     A proportional voting system, such as cumulative voting, is often used to elect multiple members to a board or committee.  It can also be useful for various other decision-making or prioritization tasks.  Each voter is allowed to cast a number of votes according to the following formula:
     For corporate board elections, the weighting factor is the number of shares held by the voter.  In other contexts, the weighting factor must be defined according to the impact of the decision on each voter.  For example, those that will be required to invest the most to implement a decision – in terms of money, time, or other resources – should have the most influence over that decision.  There are no strict rules for setting weighting factors in unregulated environments; there is only the ethical requirement that weighting factors be equitable and defensible.
     Cumulative voting can support a variety of voter strategies.  Votes can be distributed among multiple alternatives to elevate a subset above “the pack,” or others deemed less worthy of support.  Alternately, support can be concentrated on one alternative to ensure its selection, leaving the remaining choices to the preferences of other voters.
     To demonstrate cumulative voting, we again return to Exhibit 1 and make the following assumptions for compatibility and simplification:
  • Two alternatives are to be accepted.
  • Weighting factors are consistent with the voting blocs identified: 
A = 1;   B = 2;   C = 3;   D = 4.
  • Votes in each bloc are divided equally between the first two choices (or rounded up for 1st-choice).
Based on this definition, the cumulative vote totals are:  W = 21; X = 75; Y = 59; Z = 52.  The selection of X and Y demonstrates the significance of weighting factors; despite voting bloc D’s smaller size, its magnified influence resulted in the selection of its first two choices.
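     The totals above can be reproduced with a short script.  The sketch below assumes each voter’s vote pool equals his/her bloc’s weighting factor and splits it between the bloc’s first two choices, rounding up for the 1st choice, per the stated assumptions.
```python
from collections import Counter

# Bloc sizes, weighting factors, and first two choices, per the assumptions above
# (preference profiles reconstructed from this article's examples).
blocs = {
    "A": {"size": 42, "weight": 1, "top_two": ["W", "X"]},
    "B": {"size": 24, "weight": 2, "top_two": ["Z", "X"]},
    "C": {"size": 19, "weight": 3, "top_two": ["Y", "Z"]},
    "D": {"size": 15, "weight": 4, "top_two": ["X", "Y"]},
}

tally = Counter()
for bloc in blocs.values():
    pool = bloc["size"] * bloc["weight"]    # weighted votes held by the whole bloc
    first, second = bloc["top_two"]
    tally[first] += -(-pool // 2)           # half the pool, rounded up for the 1st choice
    tally[second] += pool // 2

print(tally)   # W = 21, X = 75, Y = 59, Z = 52; X and Y are selected
```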
 
Borda Count
     The Borda Count is a weighted preferential voting method.  If there are (n) alternatives, a voter’s 1st choice is awarded (n-1) points, his/her 2nd choice is awarded (n-2) points, and so on.  When all ballots have been submitted, the points awarded to each alternative are summed to determine the winner.
     Our sample preference ballot offers four choices.  Therefore, 1st-choice alternatives are awarded three points.  Points awarded are reduced by one for each step down in ranking.  For our theoretical election, according to Exhibit 1, the alternatives accrue the following point totals:
 W:  (42 * 3) + (24 * 1) + (19 * 1) + (15 * 0) = 169
 X:   (15 * 3) + (42 * 2) + (24 * 2) + (19 * 0) = 177
 Y:   (19 * 3) + (15 * 2) + (42 * 1) + (24 * 0) = 129
 Z:   (24 * 3) + (19 * 2) + (15 * 1) + (42 * 0) = 125
Per Borda Count rules, alternative X is declared the winner.  If multiple selections or prioritization is the goal, the point totals can also be used to rank the alternatives for these purposes.
     While Borda Count violates the majority criterion, it is valuable when seeking a consensus decision.  In our example, alternative X has much broader support than any other alternative.  Consider the analysis summarized in Exhibit 4.  Assume that voters whose 1st or 2nd choice is ultimately selected are satisfied with the outcome and those whose 3rd or 4th choice is ultimately selected are disappointed with the result.  The goal is to maximize the former and minimize the latter.  As can be seen in Exhibit 4, this is clearly achieved by selecting alternative X; a majority of voters would be disappointed if any other alternative were selected!
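     The point totals above follow directly from the (n-1), (n-2), … weighting.  A minimal sketch, assuming the reconstructed bloc profiles:
```python
# Bloc sizes and ranked preferences, reconstructed from this article's examples.
blocs = {
    "A": (42, ["W", "X", "Y", "Z"]),
    "B": (24, ["Z", "X", "W", "Y"]),
    "C": (19, ["Y", "Z", "W", "X"]),
    "D": (15, ["X", "Y", "Z", "W"]),
}

def borda(blocs):
    """With n alternatives, a voter's k-th choice earns (n - k) points."""
    scores = {}
    for size, prefs in blocs.values():
        n = len(prefs)
        for rank, alt in enumerate(prefs, start=1):
            scores[alt] = scores.get(alt, 0) + size * (n - rank)
    return scores

print(borda(blocs))   # {'W': 169, 'X': 177, 'Y': 129, 'Z': 125}; X wins
```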
 
Score Voting
     Score or range voting is similar in concept to the evaluations in Analytic Hierarchy Process (See Vol. III).  Voters score each alternative on a scale of 0 to 9 (or other range of choice).  Each is scored relative to the others to indicate the magnitude of preferences among the alternatives.  Those that are unfamiliar can be scored “no opinion.”  The alternative with the highest average score is declared the winner.
     “No opinion” scoring causes great concern for some.  A system of “fake votes” can be instituted to counteract its influence, but the very name of it could cause confusion and distrust of the system.  Voters may also choose to counter it by giving minimum scores (i.e. 0) to unfamiliar candidates in order to avoid the “unknown lunatic wins” scenario.  This occurs when a small but fervent group of supporters score an obscure candidate at the maximum while others offer no opinion of the unknown candidate.  The result is a very high average score – and potential victory – for a candidate with very little popular support.
     “No opinion” voting is unlikely to be relevant in organizational decision-making.  Should an issue arise, however, decision-making groups must be prepared to resolve it.  An alternate approach is offered here:  score each “no opinion” vote at the median of the range.  This approach offers the following advantages:
  • A median score more accurately represents a “no opinion” vote.  If 0 is “bad” and 9 is “good,” 4.5 is a reasonable approximation of “I don’t know.”
  • “No opinion” votes will accurately reflect a lack of support by lowering the average score of “unknown lunatics” instead of allowing fanatics to inflate it.
  • Honest voting can be restored; there is no need to vote defensively.
  • “Fake” votes can be eliminated.  Who wouldn’t want that?
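     The alternate approach can be demonstrated in a few lines.  The ballots below are hypothetical, constructed only to illustrate the “unknown lunatic” scenario; “no opinion” votes are imputed at the midpoint of the scale rather than ignored.
```python
def score_result(ballots, scale=(0, 9)):
    """Average score per alternative, imputing 'no opinion' (None) at the
    midpoint of the scale instead of ignoring it."""
    midpoint = sum(scale) / 2
    totals, counts = {}, {}
    for ballot in ballots:
        for alt, score in ballot.items():
            value = midpoint if score is None else score
            totals[alt] = totals.get(alt, 0) + value
            counts[alt] = counts.get(alt, 0) + 1
    return {alt: round(totals[alt] / counts[alt], 2) for alt in totals}

# Hypothetical ballots: two fervent voters score obscure alternative "Q" at the
# maximum, while eight voters have no opinion of it.
ballots = [{"P": 7, "Q": 9}, {"P": 6, "Q": 9}] + [{"P": 8, "Q": None}] * 8
print(score_result(ballots))   # {'P': 7.7, 'Q': 5.4}; Q no longer "wins" with a 9.0 average
```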
 
Copeland Method
     The Copeland method determines a winner by subjecting all candidates to pairwise comparisons with all others.  To do this, the relative position of two candidates on a preference ballot is considered.  Of the paired candidates, the preferred one receives one vote.  The candidate that receives the most votes wins the pairwise “election” and earns one point.  If there is a tie, each candidate earns ½ point.  When all pairwise match-ups have been evaluated, the points for each candidate are tallied.  The candidate with the most points is declared the winner.
     The preference ballot data in Exhibit 1 requires six pairwise comparisons among the four alternatives.  In the first match-up, W is preferred to X by voting blocs A and C; W receives 42 + 19 = 63 votes.  Alternative X is preferred to W by blocs B and D, receiving 24 + 15 = 39 votes.  Alternative W wins the pairwise match-up and earns one point.  The other pairings are evaluated in the same way.  Results of the pairwise match-ups are summarized below.
Summing the points earned in pairwise comparisons, we get the following totals:
W:  2 points; X:  2 points; Y:  1 point; Z:  1 point.
     Alternatives W and X are tied with two pairwise victories each.  An instant run-off is an obvious choice of tie-breaker; simply return to the W vs. X match-up.  The 63 to 39 victory in the pairwise match-up earns W the overall win using this tie-breaking method.
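     The six pairwise match-ups can also be evaluated programmatically.  A minimal sketch, assuming the reconstructed bloc profiles:
```python
from itertools import combinations

# Bloc sizes and ranked preferences, reconstructed from this article's examples.
blocs = {
    "A": (42, ["W", "X", "Y", "Z"]),
    "B": (24, ["Z", "X", "W", "Y"]),
    "C": (19, ["Y", "Z", "W", "X"]),
    "D": (15, ["X", "Y", "Z", "W"]),
}

def copeland(blocs):
    """One point for each pairwise win; half a point each for a tied match-up."""
    alternatives = next(iter(blocs.values()))[1]
    points = {alt: 0.0 for alt in alternatives}
    for a, b in combinations(alternatives, 2):
        a_votes = sum(size for size, prefs in blocs.values() if prefs.index(a) < prefs.index(b))
        b_votes = sum(size for size, prefs in blocs.values() if prefs.index(b) < prefs.index(a))
        if a_votes > b_votes:
            points[a] += 1
        elif b_votes > a_votes:
            points[b] += 1
        else:
            points[a] += 0.5
            points[b] += 0.5
    return points

print(copeland(blocs))   # {'W': 2.0, 'X': 2.0, 'Y': 1.0, 'Z': 1.0}; W and X tie
```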
 
Summary
     As shown by applying various voting systems to a single set of data, the outcome is affected by the method employed.  Trust in a voting system can also be affected by its satisfaction or violation of fairness criteria.  Exhibit 5 summarizes the performance of the voting systems discussed.  Satisfaction or violation of the four fairness criteria described and the winner(s) chosen by each are presented.
     The choice of fairness criteria to consider depends upon the culture and philosophy of the organization within which voting and decision-making takes place.  The availability of advanced voting technology simplifies the application of multiple voting systems to a single set of data, allowing organizations to evaluate the consistency of outcomes with varying levels of “fairness” as defined by their choices of criteria.  Consistent, fair outcomes can increase confidence in decision-making processes and build a more cohesive team.
 
     As mentioned previously, there are many voting systems and variations thereof available.  Further modification to suit an organization’s unique needs is also acceptable.  Whatever systems, criteria, or guidelines are adopted or developed should be documented in an organizational decision-making standard, as described in “Vol. IV:  Fundamentals of Group Decision-Making.”  This standard is the foundation of consistency and fairness that maximizes organizational support for decisions.
 

     If your organization would like additional guidance on voting systems, voting technology, decision-making standards, or related topics, contact JayWink Solutions for a consultation.
 
References
[Link] “Voting Methods Overview.”
[Link] “All About Voting Methods.”
[Link] “Voting Research - Voting Theory.”  Paul Cuff, Sanjeev Kulkarni, Mark Wang, John Sturm; Princeton University.
[Link] “Voting Theory.”  David Lippman, The OpenTextBookStore, 2017.
[Link] “Fair Representation Voting Methods.”
[Link] “Range Voting.”
[Link] “Mathematical Foundations of AI, Lecture 6:  Social Choice 1.”  Ariel D. Procaccia; Carnegie Mellon University, 2008.
[Link] “Seven Methods for Effective Group Decision-Making.”

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Making Decisions – Vol. V:  Group Techniques]]>Wed, 03 Jun 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/making-decisions-vol-v-group-techniques     “Fundamentals of Group Decision-Making” (Vol. IV) addressed structural attributes of decision-making groups.  In this volume, we discuss some ways a group’s activities can be conducted.  An organization may employ several different techniques, at different times, in order to optimize the decision-making process for a specific project or group.
     The following selection of techniques is not comprehensive; organizations may discover others that are useful.  Also, an organization may develop its own technique, often using a commonly-known technique as a foundation on which to create a unique process.  The choice or development of a decision-making process must consider the positive and negative impacts – potential or realized – on decision quality, efficiency, and organizational performance factors.
     It is possible to reach effective decisions with an unstructured approach in which the group informally discusses and debates options in an open forum.  However, it is a very rare and special group that can routinely operate in this fashion without falling victim to one or more of the caveats discussed in Vol. IV.  Given the importance of decision-making to organizational performance, it is advisable to define a process to be followed before the group convenes, or as the group’s first order of business.  Some options are summarized below.
 
Nominal Group Technique
     For the Nominal Group Technique, each member of the group independently compiles a list of ideas and proposals prior to meeting.  When the group convenes, each member shares his/her ideas and proposals, usually one at a time in round-robin fashion, until all of the group’s ideas have been captured in a consolidated list.  Clarifications can be requested to ensure an accurate record of each idea, but critiques and other discussion should not take place during this collection process.  Assigning facilitator and recorder roles can help this process proceed smoothly and efficiently.  See “Meetings:  Now Available in Productive Format!” for additional guidance.
     Once the consolidated list is complete, evaluation of proposals can begin.  Each member can ask questions or voice concerns about any of the ideas recorded.  This can be done by considering each item on the list sequentially, or randomly as thoughts occur to participants, so long as no member or proposal is given short shrift.
     At the conclusion of the discussion, members vote on the most promising alternatives.  If there is a large number of proposals under consideration, the group may narrow the field by voting for a small number to investigate further.  When these alternatives have been further developed – that is, the costs, benefits, and risks are better understood – the group votes again.  The final selection is made according to the decision rule defined at the outset.
     Nominal Group Technique combines independent exploration of a problem with group development to maximize creativity and refinement of its chosen solution.  Independent exploration in advance often results in more and better proposals by allowing each member to choose the best time to ponder the issue.  It also reduces the time required for the group to collect ideas.
 
Delphi Method
     A key premise of the Delphi Method is the anonymity of group members.  Anonymity is maintained in order to prevent some of the pitfalls of group decision-making; tangential discussions, groupthink, deference to authority figures, and domineering personalities are all but eliminated.
     Another key premise is the use of a facilitator through which all communication among members is channeled.  The process begins when the facilitator crafts a questionnaire to elicit input on a specific problem or decision to be made.  The questionnaire is distributed to the group and the responses are aggregated and summarized.  The information collected is then distributed to the group for feedback.  This process is repeated until a consensus or a predefined limit has been reached.
     In addition to the advantages mentioned above, the Delphi Method is also location- and schedule-neutral as no meetings are held.  It may also allow external partners to be engaged in the process, such as suppliers, academics, regulators, or other “experts” whose participation in traditional meetings would not be feasible.
     The Delphi Method is not without its drawbacks, however.  Accommodating each member’s schedule can cause significant delay in reaching a final decision.  Many lament the absence of spontaneous discussion that could lead to new or improved ideas.  Also, the facilitator can influence the decision through induced bias, inaccurate summaries of member input, or poorly-crafted questionnaires.  Reaching a high-quality decision, in an efficient manner, requires an objective and skilled facilitator.
 
Stepladder Technique
     The Stepladder Technique begins with two “core” team members discussing alternatives before inviting others to join the conversation one at a time.  Members develop ideas independently prior to joining the group discussion.  Each additional member presents his/her ideas to the group and is informed of previous discussions and ideas presented.  The new member adds to the evaluations, discussing the merits of proposals.  The group grows as this process is repeated until all members have presented their ideas and participated in evaluations of all presented alternatives.
     It is easy to see how this technique could become painfully repetitive and time-consuming for the original two members.  Thus, it should only be employed with small groups.  A small group will minimize the tendency of the original members to lose enthusiasm for the process and, thus, effectiveness, or to “anchor” on an early proposal as information overload sets in.  This format is also particularly susceptible to a dominant personality seeking to suppress the ideas of less-assertive members of the group.
     To be sure, the Stepladder Technique has its advantages when well-executed.  Like the Nominal Group Technique, this technique combines individual exploration with group evaluation and development.  As each new member joins, the group gets an infusion of energy and enthusiasm.  New members are uninfluenced by prior discussions and undeterred by previous objections; their fresh perspectives encourage the group to thoroughly consider all of the alternatives.
 
Consensus-Oriented Decision-Making (CODM)
     While the other group techniques discussed can be conducted under any decision rule, the goal of Hartnett’s CODM Model is explicitly predefined.  The model defines a seven-step process that a group can follow to reach a consensus decision.  The steps are summarized below.
(1) Frame the problem.  Define the problem to be solved or decision to be made.  Communicate all information pertaining to the decision to all members of the group.  Include specific issues to be resolved and outcomes to be attained.
(2) Have an open discussion.  All members are invited to share ideas, concerns, and insights on which future discussions will build.  Record all relevant information for future reference.
(3) Identify underlying concerns.  Identify all stakeholders affected by the decision and the concerns of each for the decision or outcomes.
(4) Develop proposals.  Building on information gathered in previous steps, the group develops proposals that address stakeholders’ concerns while solving the initial problem.
(5) Choose a direction.  From the pool of alternatives developed in Step 4, select the most promising proposal or preferred aspects of multiple proposals to be combined in the final solution.
(6) Develop a preferred solution.  Build the components chosen in Step 5 into a fully-developed solution.  Verify that the solution remains focused on the original problem stated and that stakeholders’ concerns have been addressed.
(7) Close.  Confirm that the group remains in consensus, each member supporting the final decision.  Execute and monitor implementation of the decision, making adjustments as needed.
     The hallmark of Hartnett’s CODM Model is that it draws attention to stakeholders and their concerns.  The other techniques discussed can provide for stakeholder consideration, but they do not give it the explicit treatment that this model, justifiably, does.
 
     Brainstorming and its variations are often included in discussions of decision-making techniques.  (The Charette Procedure, for example, defines how large groups can be divided into smaller brainstorming groups to be more effective.)  However, no decisions are made during brainstorming sessions; commonly-accepted rules of brainstorming prohibit it!  It is a very useful component of many decision-making processes, so long as it is recognized as an idea-generation tool that precedes decision-making.
 
     As mentioned in the introduction, other decision-making techniques could be discovered or developed.  Perhaps a hybrid of the techniques discussed would best suit your organization’s needs.  You are free to adopt, adapt, modify, or start anew in order to provide the efficient, high-quality decision-making process that your group needs.
 
     Whether you need a facilitator to guide a decision-making group, to develop a hybrid process, or to ask a question, JayWink Solutions is here to help.  You can get in touch in the comments section, on our contact page, or by phone, text, or email.
 
References
[Link] “Seven Methods for Effective Group Decision-Making.”
[Link] “The Delphi Method.”
[Link] “Hartnett’s CODM Model.”
[Link] “7 Ready To Implement Group Decision Making Techniques For Your Team.”  Anand Inamdar, UpRaise, June 14, 2019.
[Link] “Make Up Your Mind.”  Johanna Rusly, Quality Progress, November 2017.
[Link] “Group Decision Making.”

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Making Decisions – Vol. IV:  Fundamentals of Group Decision-Making]]>Wed, 20 May 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/making-decisions-vol-iv-fundamentals-of-group-decision-making     In business contexts, many decisions are made by a group instead of an individual.  The same is true for other types of organization as well, such as nonprofits, educational institutions, and legislative bodies.  Group decision-making has its advantages and its disadvantages.  There are several other considerations also relevant to group decision-making, such as selecting members, defining decision rules, and choosing or developing a process to follow.
     Successful group decision-making relies on a disciplined approach that proactively addresses common pitfalls.  If an organization establishes a standard that defines how it will form groups and conduct its decision-making activities, it can reap the rewards of faster, higher-quality decisions, clearer expectations, less conflict, and greater cooperation.
Why make decisions as a group?
     Group decision-making can be a powerful tool for any organization, if executed properly.  Exploiting its advantages provides benefits that extend beyond the current decision or project to routine interactions and future projects.  These advantages include:
  • Each group member is a source of data and information that can be used to improve decision quality.  If a decision-making group consists of department managers, for example, each acts as a funnel to direct the full intelligence-gathering power of his/her respective department to the group.  Quality decisions require quality inputs.
  • Each member – and his/her support network – also analyzes the information collected by the group, resulting in more-thoroughly vetted, validated, and cleansed information.  Faulty information is less likely to influence the decision.
  • Varied experience of members leads to a broader range of ideas and more creative development of potential solutions.
  • Open discussion of member perspectives can reveal and temper biases that could otherwise unjustifiably influence the decision.
  • Multiple viewpoints provide for more-accurate risk assessments and identification of key challenges relevant to the alternatives under consideration.
  • Broader involvement of members throughout the organization fosters greater understanding of and commitment to chosen strategies.
  • Dispersion of responsibility encourages groups to establish higher performance expectations for the organization that energize employees and fuel improvement.
  • A decision-making group may consist of members that would not otherwise work directly with one another.  The opportunity for these “strangers” to interact can yield a more cohesive team.
  • The process and decision logic are more transparent when arguments are made in an open forum.  This format provides a mentoring opportunity to improve others’ decision-making capabilities through simple observation of group interactions.  To provide an equivalent training experience, an individual decision-maker would have to explain his/her entire thought process in detail.  An individual’s presentation of his/her thought process is notoriously ineffective; it is often an incomplete discussion of tradeoffs, and biases are rarely confessed.
     There are also several characteristics of group decision-making that are generally considered to be disadvantages.  It is more useful, however, to frame them as caveats, because the detrimental effects can be avoided if members are aware of the risks and actively engage in countermeasures.  These caveats and countermeasures include:
  • Discussion and debate can be time-consuming.  A group should be convened only to make decisions of sufficient complexity and consequence to warrant the expenditure of resources required to properly execute a group decision-making process.
  • Groups are susceptible to self-induced distractions; discussions may veer off-topic.  Each team member must be willing to identify a discussion thread as tangential to the decision and guide the group to more-relevant topics.
  • The onset of groupthink could nullify the benefits expected from a group process.  One or two members could be assigned the responsibility of “playing devil’s advocate” to encourage robust debate and thorough analysis of alternatives before a decision is made.
  • A dominant personality or a person in a position of authority may unduly influence the decision, creating an illusion of agreement among group members.  Including a moderator or facilitator can ensure that all members’ inputs are considered.  (See Meetings:  Now Available in Productive Format! for additional guidance.)
  • Dispersion of accountability for a decision or its consequences may reduce members’ interest, commitment, or effort.  Only those to whom the decision is significant should influence it.
  • There is potential for conflicting agendas among group members to divert attention from the organization’s best interests.  If any member attempts to unfairly sway a decision solely for self-serving purposes that are inconsistent with organizational objectives, that member should be removed from the group.  This includes providing misleading information to the group, or withholding information, in order to favor one’s own position.
     Note that the dispersion of responsibility and accountability are opposite sides of the same coin.  Whether dispersion becomes an advantage or disadvantage depends, in large part, on the makeup of the group and the character of its members.  Similarly, the potential for groupthink or a domineering personality to result in a suboptimal decision is strongly influenced by the selection of group members.  Because it is a pervasive theme and a critical consideration in group decision-making, member selection is the topic to which we now turn.
 
Who should be in the decision-making group?
     Decision quality and efficiency will depend on many factors; chief among them is the membership roster of the decision-making group.  This is true because the group’s membership has a compound effect due to its influence on all other relevant factors.  Jim Collins’ business leadership metaphor, from his book Good to Great, is equally fitting for decision-making groups at any level of an organization, from front-line operations to executive leadership:
     “Get the right people on the bus, the wrong people off the bus, and the right people in the right seats.”
     Whether or not a person should become a member of a group tasked with making a specific decision depends on two factors:
(1) the person’s expertise in subjects relevant to the decision, and
(2) the impact the decision will have on the person’s area of responsibility within the organization.
One method of evaluating these factors for each potential group member is to use the Hoy-Tarter Model.  This model frames the two factors as questions to which each candidate receives (or submits, in the case of self-assessment) an answer of “yes” or “no” to generate a 2 x 2 matrix as shown in Exhibit 1.  Our formulation uses the following questions:
  • “Is the decision relevant and significant to this individual?” on the horizontal.
  • “Does this individual possess requisite expertise?” on the vertical.
Each candidate is placed in the quadrant corresponding to his/her answers.  Accuracy of answers should be verified, particularly when attained via candidate self-assessment, to prevent inappropriate influence of a decision and shirking.  Those that answered “yes” to both questions (a yes/yes response) are placed in the upper left quadrant of the matrix and invited to join the group; these are the “core” members.  Those with no/no responses are placed in the lower right quadrant and dismissed from further participation.  The remaining two quadrants – the “contingent” members – require additional consideration to finalize the group roster.
     The lower left quadrant is for yes/no responses.  To determine if these candidates should join the group, ask an additional question:  “Are this person’s interests adequately represented by other members of the group?”  If so, this person can be dismissed; if not, s/he may be invited to join the group as a full member or in limited capacity.
     The upper right quadrant is for no/yes responses.  The additional question for this subset is “Will this person’s expertise duplicate that of other members?” or, more simply, “Is there sufficient expertise in the group?”  If the candidate offers no unique expertise s/he can be dismissed.  However, if the person possesses requisite skills not yet available within the group, s/he should be invited to join.
     An additional consideration for the contingent members is the size of the group.  If the group grows larger than the scale and scope of the decision warrants, some or all of the contingent members may be dismissed to maintain the group’s agility and efficiency.  In fact, the core membership should also be reduced if the group is larger than the decision warrants.  Consider likely redundancies in members’ contributions, members’ total workload, and other relevant factors when finalizing the roster.  Candidates in each quadrant can be ranked during the initial evaluations to speed the final selection process should the group’s membership need to be pared.
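     The quadrant logic lends itself to a simple screening script.  The sketch below is only an illustration of the two questions and follow-up dispositions described above; the candidate assessments are hypothetical.
```python
def hoy_tarter_quadrant(is_relevant: bool, has_expertise: bool) -> str:
    """Disposition for a candidate based on the two questions described above.
    (A sketch only; wording follows this article, not a standard library.)"""
    if is_relevant and has_expertise:
        return "core member: invite to join"
    if is_relevant and not has_expertise:
        return "contingent: invite only if his/her interests are not already represented"
    if not is_relevant and has_expertise:
        return "contingent: invite only if the expertise is not already in the group"
    return "dismiss from further participation"

# Hypothetical candidate assessments: (decision relevant/significant?, requisite expertise?)
candidates = {
    "Quality Manager": (True, True),
    "Line Supervisor": (True, False),
    "Outside Expert": (False, True),
    "HR Generalist": (False, False),
}
for name, answers in candidates.items():
    print(f"{name}: {hoy_tarter_quadrant(*answers)}")
```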
 
     The Hoy-Tarter Model helps get the right people on the bus and the wrong people off the bus.  If the group needs help getting people in the right seats, there are tools available to help with this as well.  One such tool is Bain’s RAPID Framework.  RAPID is an acronym for the roles assigned to group members.  Exhibit 2 contains a description of each role in the framework, the quadrants in the Hoy-Tarter Model to which each role typically corresponds, and example functions that commonly serve in each role.
Those assigned the Perform role may change as expectations of the expertise required to implement a decision evolve.  Membership in the Decide subgroup will be determined by the decision rule used.  This is the subject explored next.
 
Who should make the final decision?
     The person(s) responsible for making the final decision is defined by the decision rule under which the group operates.  Several common decision rules are described below.
  • Unilateral:  A single person has the authority to impose a decision.  This person may be the designated decision-maker due to superior expertise and insight or by being the highest-ranking officer of the organization present.
  • Executive Committee:  After receiving all available information, including support for and objections to alternatives, risk assessments, financial projections, and so on, a subset of members impose a decision.  The executive committee is usually comprised of a small number of members to allow a conclusion to be reached quickly and efficiently.
  • Plurality:  The alternative for which the largest number of group members vote is selected.  When members must choose from more than two alternatives, a plurality rule may be necessary to ensure that a decision is reached; a majority of votes may not be received by any alternative.
  • Simple Majority:  More than half of the group’s members must vote in concurrence for a single alternative.  Often used without the qualifier, i.e. “majority.”
  • Super Majority:  Significantly more than half of the group’s members must vote in concurrence for a single alternative.  Two-thirds and three-quarters are common super majority requirements, but the organization can define the requirements as it sees fit.
  • Unanimity:  All group members must vote in concurrence for a single alternative; the most-extreme super majority requirement (100%) possible.
  • Consensus:  All members commit to supporting a decision and aiding execution of a plan created by the group by means of alternative development and compromise.  Disagreement on aspects of the decision may remain, but an outward appearance of unanimity is created as the group agrees to present a unified front.
     Variations on the decision rules described could also be implemented.  For example, large groups may consist of voting members (core group members, say) and non-voting members (Input role-players, perhaps).  The roster of voting members will be larger than an executive committee, but may eliminate extraneous votes, allowing a more expeditious conclusion to be reached without sacrificing decision quality.  The voting members are then subject to the chosen decision rule (plurality, unanimity, etc.).  Other decision rules could also be crafted to suit an organization’s unique needs.
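     Vote-count thresholds are easy to check once a rule is defined.  The sketch below is only an illustration; note that threshold details, such as whether exactly two-thirds satisfies a super majority, are themselves choices to document in the organization’s decision-making standard.
```python
import math

def rule_satisfied(votes_for: int, group_size: int, rule: str) -> bool:
    """Check one alternative's vote count against a decision rule.  (Plurality is
    omitted because it is judged against other alternatives, not a fixed threshold.)"""
    if rule == "simple majority":
        return votes_for > group_size / 2                      # more than half
    if rule == "two-thirds":
        return votes_for >= math.ceil(2 * group_size / 3)
    if rule == "three-quarters":
        return votes_for >= math.ceil(3 * group_size / 4)
    if rule == "unanimity":
        return votes_for == group_size
    raise ValueError(f"undefined decision rule: {rule}")

print(rule_satisfied(7, 12, "simple majority"))   # True:  7 of 12 is more than half
print(rule_satisfied(7, 12, "two-thirds"))        # False: at least 8 of 12 is required
```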
 
How should a decision-making standard be structured?
     There is no single “correct” format for documenting decision-making guidelines.  The document should be consistent with the communication style used by the organization for other standards.  It should also be subject to similar review and revision procedures.
     The content of the standard will also vary among organizations, though the inclusion of certain types of information should be considered a basic requirement.  A discussion of these “basics” follows.
The process used to form decision-making groups.  The Hoy-Tarter Model has been presented as one option; other methods could also be used.  A simple “at leader’s behest” could also be specified, though any organization choosing this unstructured method should be aware of its limitations.  Namely, favoritism and reinforced biases may result in a predetermined outcome, while relevant information and affected individuals are indiscriminately excluded.
Tools and techniques to be used.  Recommendations and/or requirements for the use of specific decision-making aids favored by the organization should be identified.  Any that are in disfavor should also be noted to prevent rejection of a group’s analysis.  Circumstances that require or prohibit use of a specific tool should also be identified (more on this in the final point).
The decision rule.  Decision rights must be defined to establish a shared understanding of responsibilities within the group.
A “Hung Jury” clause.  If a group is unable to reach a decision by ordinary means, a process for resolving the impasse should be predefined.  For example, a group operating under a consensus rule that is unable to reach consensus in a reasonable period of time (also requiring definition) may be directed to an arbitrator for resolution.  In another example, a group requiring a majority vote is evenly split.  In this case, a higher-level manager could be designated to provide the tie-breaking vote.  This stipulation should be included for each type of decision rule available to the organization that could result in deadlock.  This clause is not required for unilateral decisions, for example, as a “tie” is impossible.
Any criteria that change the process requirements.  Application of the “basics” discussed could vary from one decision to another within an organization.  Financial implications, the anticipated duration of a project, the range of expertise required, or other factors may influence an organization’s approach to a decision.  For example, a short-term project with a small budget may be executed within a single department.  Thus, unilateral decisions by the department manager, reached with minimal formality, may be acceptable.  However, a long-term project that affects several departments and requires substantial capital expenditure may warrant a thorough financial and environmental analysis that involves upper management.  Predefined thresholds for these requirements will ensure that accepted practices are followed, minimizing the number of decisions that stall.  Restarting a decision process causes duplication of effort and reduces the organization’s efficacy.
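     As a purely illustrative sketch of such predefined thresholds, the Python snippet below maps a decision’s budget, duration, and reach to the process requirements a standard might impose.  Every dollar amount, duration, and process description here is a hypothetical placeholder; each organization must define its own.

```python
def required_process(budget_usd, duration_months, departments_affected):
    """Map a decision's scope to the process requirements the standard imposes.

    All thresholds and process descriptions are illustrative placeholders.
    """
    if budget_usd < 10_000 and duration_months <= 3 and departments_affected == 1:
        return "unilateral decision by department manager; minimal formality"
    if budget_usd < 250_000 and departments_affected <= 3:
        return "cross-functional group; simple majority; financial analysis required"
    return "executive committee; full financial and environmental analysis"

print(required_process(8_000, 2, 1))
print(required_process(1_500_000, 18, 5))
```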
     Decision-making guidelines could also reference requirements for meetings, progress updates to management, or anything else the organization deems appropriate and valuable.  Larger organizations are likely to include more detail, while smaller ones may require more time to develop their standards fully.  With periodic review, standards and guidelines evolve and improve to meet the needs of an organization as it matures.
 
            For customized guidance, contact JayWink Solutions for a consultation.  General questions can be left in the comments section below.  As always, you’re invited to let us know how we can assist you and your team.
 
References
[Link] “Seven Methods for Effective Group Decision-Making.”
[Link] “The Hoy-Tarter Model of Decision Making.”
[Link] “Bain’s RAPID Framework.”
[Link] “7 Ready To Implement Group Decision Making Techniques For Your Team.”  Anand Inamdar, UpRaise, June 14, 2019.
[Link] “17 Advantages and Disadvantages of Group Decision Making.”
[Link] “Managing Group Decision Making.”
[Link] “Group Decision-Making : it’s Advantages and Disadvantages.”  Smriti Chand, Your Article Library.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Making Decisions – Vol. III:  Analytic Hierarchy Process]]>Wed, 06 May 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/making-decisions-vol-iii-analytic-hierarchy-process     While the Rational Model provides a straightforward decision-making aid that is easy to understand and implement, it is not well-suited, on its own, to highly complex decisions.  A large number of decision criteria may create numerous tradeoff opportunities that are not easily comparable.  Likewise, disparate performance expectations of alternatives may make the “best” choice elusive.  In these situations, an additional evaluation tool is needed to ensure a rational decision.
     The scenario described above requires Multi-criteria Analysis (MCA).  One form of MCA is Analytic Hierarchy Process (AHP).  In this installment of “Making Decisions,” application of AHP is explained and demonstrated via a common example – a purchasing decision to source a new production machine.
     Before embarking on the Analytic Hierarchy Process example, it is important to note that AHP was originally developed with matrix notation; calculations were performed with matrix algebra.  To make this decision-making aid more accessible, AHP will be executed here with tables and basic mathematics in lieu of matrix operations.  Due, in part, to the simplified mathematics, the process, at first glance, may seem long and tedious.  Do not be discouraged!  Though the presentation may seem lengthy, the process is easy to follow and simple to implement, particularly when using a spreadsheet to perform the required calculations.
 
Developing a Hierarchy
     The first step of the decision-making process, as always, is to define the decision to be made.  In AHP, the decision scenario is represented by a hierarchy.  The decision hierarchy for our example, shown in Exhibit 1, consists of three levels.  Level 1 is the goal, objective, or purpose of the decision.  The goal of our example is summarized as “Buy Widget Machine.”  The long form of this objective statement may be “Choose source for purchase of new production machine for Widget Line #2.”  If the objective is understood by all involved, the summary statement is sufficient; if there is concern of confusion, use a more detailed statement.
     Level 2 is populated with the criteria deemed relevant to the decision.  In our example, machines will be evaluated on the dimensions of cost, productivity, and service life.  Additional criteria could have been added, as well as additional levels of analysis.  For example, the cost dimension could have been split into sub-criteria such as purchase price, maintenance cost, and disposal cost.  However, we will forego the additional complexity in our example, as it may be a bit overwhelming in one’s first exposure to AHP.
     Level 3 identifies the alternatives to be considered.  For our example, let’s assume that RFPs (requests for proposal) have been sent to Jones Machinery, Wiley’s Widget Works, and Acme Apparatus Co.  At this stage, only the potential sources are known; it is best if the content of the proposals has not yet been revealed to evaluators.  A “blind” process limits the bias affecting the criteria evaluations.
     Additional alternatives could also have been included in the analysis.  An analysis with the minimum number of levels, with three evaluation criteria and three alternatives (a “3 x 3 decision matrix”), was chosen for balance.  The analysis is sufficiently complex to demonstrate the value of AHP, but not so complex as to overwhelm those unfamiliar with it.  The process followed here is appropriate with as many as ten criteria and alternatives (10 x 10 matrix).  Larger decision matrices require adjustments that are beyond the scope of this discussion.
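     For readers who prefer to work programmatically rather than in a spreadsheet, the hierarchy can be recorded in a simple data structure.  The Python sketch below (Python is used for all of the sketches that follow) merely captures the three levels of the example; it is a convenience, not a prescribed element of AHP.

```python
# The three levels of the example decision hierarchy (Exhibit 1)
hierarchy = {
    "goal": "Buy Widget Machine",
    "criteria": ["COST", "PRODUCTIVITY", "SERVICE LIFE"],
    "alternatives": ["Jones Machinery", "Wiley's Widget Works", "Acme Apparatus Co."],
}
```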
 
Evaluating Criteria
     Once the decision hierarchy has been established, the analysis can begin in earnest.  This begins with pairwise comparisons of the evaluation criteria to quantify the relative importance of each.  Comparisons are “scored” on Saaty’s Pairwise Comparison Scale, shown in Exhibit 2.  Scores are usually odd numbers, ranging from 1 to 9.  If greater discrimination or compromise is required, even numbers (from 2 to 8) can be used.  Maximum discrimination is achieved using decimal scores, but the additional effort adds value only in the most extraordinary circumstances.  Ours is a straightforward example, requiring only odd integer scores.
     The pairwise comparisons of evaluation criteria for our example are shown in Exhibit 3.  To complete the table, only the three shaded cells in the upper right require entries.  The diagonal is always populated with “1.000,” as each criterion compared with itself must be of equal importance.  Entries in the lower left section of the table are calculated automatically as the reciprocals of those in the upper right (mirrored across the diagonal), as the order of comparison is reversed.  Sum each column of the completed comparison table.
To ensure that the table is populated correctly, follow the guidelines provided in Exhibit 4.
     The reasoning behind each of the pairwise comparison “scores” is given in Exhibit 5.  Presenting this information is not considered a requisite part of AHP, but it is a useful one.  This small addition provides a valuable reference for the inexperienced or anyone reviewing prior decisions in order to improve the quality of future decisions.
     Next, we will create another table, the normalized matrix, and calculate a priority value for each criterion.  To normalize the matrix, the value in each cell is divided by its column total.  The normalized COST/COST cell value is 1.000/15.000 = 0.067; the normalized PRODUCTIVITY/COST value is 9.000/15.000 = 0.600, and so on.  The normalized matrix is shown in Exhibit 6.  The final column of the table shows the priority of each criterion, calculated by averaging the normalized values in each row.  The priority values reflect the relative importance of each criterion.  The higher the priority value, the more important the criterion to the decision; the greater the difference between two priority values, the greater the relative importance of the higher-priority criterion.  If calculated correctly, the sum of priorities equals unity.
     The results obtained thus far are often presented in the format shown in Exhibit 7, referred to here as the standard presentation of results.  It is simply a composite of previous tables; the original comparison “scores” from Exhibit 3 are shown with the criteria priorities calculated in Exhibit 6.
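     The calculations described above are easy to automate.  The sketch below reconstructs the criteria comparison matrix from the values quoted in the text; the PRODUCTIVITY-versus-SERVICE LIFE score of 3 is inferred from the exhibits rather than stated explicitly.  Normalization and priority calculation then require only a few lines.

```python
import numpy as np

# Pairwise comparison matrix for the criteria, ordered COST, PRODUCTIVITY, SERVICE LIFE.
# Upper-right scores are the evaluators' entries; the lower-left cells are reciprocals.
A = np.array([
    [1.0, 1/9, 1/5],
    [9.0, 1.0, 3.0],
    [5.0, 1/3, 1.0],
])

normalized = A / A.sum(axis=0)        # divide each cell by its column total (Exhibit 6)
priorities = normalized.mean(axis=1)  # average each row to obtain the criteria priorities
print(priorities.round(3))            # approximately [0.064, 0.669, 0.267]; sums to 1
```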
Checking Consistency
     To ensure that the analysis and, thus, the decision-making process, is progressing rationally, we check the consistency of the criteria evaluations.  In this section, thorough derivations of the calculations will not be presented.  In-depth knowledge of the derivations is not required for successful use of AHP and is beyond the scope of this post.  More information can be found in the references cited at the end of the post.
     In its simplest form, a consistency check can be formulated in the following way:  “Verify that a > c when it has been determined that a > b and b > c.”  Extending this formulation slightly, to account for the degree of consistency, we may replace “a > c” with “a >> c” in the previous statement.
     To “check consistency” is synonymous with “calculate the Consistency Ratio (C.R.) of the comparison table.”  To do this, start with the standard presentation of Exhibit 7.  Multiply each “score” by the priority corresponding to the criterion in its column.  That is, multiply each cell in the COST column by the COST priority (0.064), multiply each cell in the PRODUCTIVITY column by the PRODUCTIVITY priority (0.669), and multiply each cell in the SERVICE LIFE column by the SERVICE LIFE priority (0.267).  The table now consists of weighted columns.  Sum each row, recording the result in the WEIGHTED SUM column, as shown in Exhibit 8.
     Our next step is to calculate λmax (“lambda max” – see references for details), the average of the ratios of each criterion’s weighted sum (from Exhibit 8) to its priority (from Exhibit 7).  The calculation of λmax for the criteria evaluations is shown in Exhibit 9.
     The Consistency Index (C.I.) is now calculated, as shown in Exhibit 10, where N is the number of criteria included in the analysis.  Finally, the Consistency Ratio is calculated as the ratio of the Consistency Index to the Random Consistency Index (R.I.) found in Exhibit 11 (see references for details).
     An established guideline sets a threshold value of C.R. at 0.100.  A set of comparisons with C.R. ≤ 0.100 is deemed consistent and a valid result can be expected.  For C.R. > 0.100, decision-makers should consider revisiting the pairwise comparisons, adjusting evaluation “scores” to achieve greater consistency.  Evaluating a large number of criteria may warrant accepting a higher C.R. due to the complexity inherent in analyzing many interrelated characteristics simultaneously.  If any criteria are found to be dependent on others (see mutual independence in glossary, Vol. I), those criteria should be eliminated (or combined) to simplify the analysis.  Also, aggregating evaluations from members of a decision-making group may induce a higher level of inconsistency; accepting a higher C.R. may be necessary to advance the decision-making process.
     The consistency ratio calculation for our example is shown in Exhibit 12.  The result is color-coded according to the threshold discussed above.  Our C.R., at 0.025, is significantly below the threshold value, indicating a consistent, valid analysis.  This level of consistency can be expected for relatively simple hierarchies like our example.
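     The consistency check is equally amenable to automation.  The sketch below uses the widely published Random Consistency Index values for matrices up to 10 x 10 (see references); applied to the comparison matrix above, it returns approximately 0.025, in agreement with Exhibit 12.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
      8: 1.41, 9: 1.45, 10: 1.49}   # Random Consistency Index (Saaty)

def consistency_ratio(A):
    """Compute the Consistency Ratio (C.R.) of a pairwise comparison matrix."""
    n = A.shape[0]
    priorities = (A / A.sum(axis=0)).mean(axis=1)    # as in Exhibit 6
    weighted_sum = (A * priorities).sum(axis=1)      # weight each column, sum each row
    lambda_max = (weighted_sum / priorities).mean()  # average weighted-sum/priority ratio
    CI = (lambda_max - n) / (n - 1)                  # Consistency Index
    return CI / RI[n]

A = np.array([[1.0, 1/9, 1/5],
              [9.0, 1.0, 3.0],
              [5.0, 1/3, 1.0]])
print(round(consistency_ratio(A), 3))   # ~0.025, well below the 0.100 threshold
```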
Comparing Alternatives
     With a validated, consistent criteria evaluation, the analysis can now incorporate information about potential alternatives.  Responses to our three RFPs are summarized in Exhibit 13, known as the Performance Matrix.  Before beginning the comparisons, it should be verified that each alternative meets all minimum performance criteria established.  Any that do not should be removed from the analysis for two reasons:
(1) Fewer alternatives require fewer calculations and less time to complete the analysis.
(2) The compensatory nature of AHP may result in misleading rankings.  It is possible for excellent performance in several dimensions to earn an alternative a favorable ranking despite its disqualifying performance in one criterion, invalidating the results.
For purposes of our example, we will assume that no such disqualification conditions exist.
     The information presented in the performance matrix is used to conduct pairwise comparisons of alternatives following the same process as the criteria evaluations.  A set of tables (or matrices), comparable to that created for the criteria comparisons, will be generated for each criterion.  The process followed each time is the same as above; therefore, it will be presented here with much less detail.
     First, each potential source (alternative) will be compared on the dimension of COST, as shown in Exhibit 14.  Like before, only the upper right of the table requires values to be entered.  To “score” each comparison, use the scale in Exhibit 2 and the guidelines in Exhibit 4, focusing on “preference” instead of “importance.”  For example, Jones Machinery’s lower price earns it a moderate preference (3.000) over Wiley’s Widget Works and a very strong preference (7.000) over Acme Apparatus Co.  Wiley’s intermediate price earns it a strong preference (5.000) over Acme.
     The pairwise comparisons are normalized and priorities calculated as shown in Exhibit 15.  The standard presentation of results is provided in Exhibit 16.
     Consistency of the evaluations must be checked, following the same process detailed for the criteria evaluations.  The consistency check for the cost comparisons is shown in Exhibit 17.
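     Because the COST comparison scores are quoted above, the corresponding local priorities can be reproduced with the same normalization used for the criteria.  The brief sketch below continues the earlier examples.

```python
import numpy as np

# Pairwise comparisons on COST (rows/columns ordered Jones, Wiley's, Acme):
# Jones over Wiley's = 3, Jones over Acme = 7, Wiley's over Acme = 5.
C = np.array([[1.0, 3.0, 7.0],
              [1/3, 1.0, 5.0],
              [1/7, 1/5, 1.0]])

cost_priorities = (C / C.sum(axis=0)).mean(axis=1)
print(cost_priorities.round(3))   # Jones ranks best on COST alone
```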
     Repeating the process for the two remaining criteria yields a similar set of tables and calculations.  Comparisons of alternatives and calculation of PRODUCTIVITY priorities are shown in Exhibit 18.  The consistency check for PRODUCTIVITY comparisons is shown in Exhibit 19.
     Comparisons and calculations with respect to SERVICE LIFE are shown in Exhibit 20 and the corresponding consistency check in Exhibit 21.  Note that the simplicity of the SERVICE LIFE comparisons resulted in a C.R. of 0.000, or “perfect” consistency.
Synthesizing the Model
     The work of AHP culminates in model synthesis, where the question “which machine should we buy?” is finally answered.  To calculate each alternative’s overall priority, we begin with the Local Priorities Table of Exhibit 22.  This table simply compiles the previously-calculated priorities of each alternative with respect to each criterion (Exhibits 16, 18, 20).
Multiply each cell by the priority of its corresponding criterion, shown in Exhibit 7.  These priority values are also called criteria weights, as referenced in Exhibit 23 with the results of this step.  Adding the values in each alternative’s row gives its OVERALL PRIORITY.  If all calculations have been performed correctly, the sum of the overall priorities will equal unity.
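     Model synthesis reduces to a single weighted sum, as the sketch below shows.  The COST column of local priorities follows from the comparison scores quoted earlier; the PRODUCTIVITY and SERVICE LIFE columns are illustrative placeholders (the exhibit values are not reproduced in the text) chosen to be consistent with the outcome described next.

```python
import numpy as np

criteria_weights = np.array([0.064, 0.669, 0.267])   # COST, PRODUCTIVITY, SERVICE LIFE

# Local priorities (rows: Jones, Wiley's, Acme; columns: COST, PRODUCTIVITY, SERVICE LIFE).
# COST column follows from the quoted scores; the other columns are placeholders.
local = np.array([
    [0.643, 0.07, 0.09],   # Jones Machinery
    [0.283, 0.28, 0.24],   # Wiley's Widget Works
    [0.074, 0.65, 0.67],   # Acme Apparatus Co.
])

overall = local @ criteria_weights        # weight each column, then sum across each row
print(overall.round(3), overall.sum())    # overall priorities; they sum to unity
```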
     The alternative with the highest overall priority is the preferred option.  In our example, the higher productivity and longer service life offered by the Acme machine more than offset its higher price.  At the opposite end of the spectrum, Jones’ low price does not compensate for its deficiencies in productivity and service life.  Thus, despite its cost advantage, Jones receives the lowest preference ranking.
 
Analyzing Sensitivity
     In the “Checking Consistency” section above, we acknowledge and accept that subjective evaluations may preclude a “perfect” analysis.  We simply conduct a test to ensure that the imperfections are acceptable.  Similar logic spurs us to conduct a sensitivity analysis of the final results.  Inconsistency or compromises in evaluations, such as those reached in a group decision-making process, could alter the outcome of an analysis.
     For our example, we consider four scenarios.  The first scenario is the original analysis results, repeated for side-by-side comparison.  The second – equally-weighted criteria – is a commonly used scenario; it is shown in the template as a “standard test.”  Each scenario is accompanied by notes describing its significance – changes in preference rankings or other insights.  The first two scenarios are shown in Exhibit 24.  Equally-weighted criteria cause no change in preference rankings.  Trial-and-error calculations reveal that reducing the PRODUCTIVITY weight as low as 0.300, with the others weighted equally, has no impact on the preference rankings.
     The final two scenarios considered for our example are “wild cards.”  Any combination of criteria weights (that sum to unity) that seems plausible can be evaluated.  Typical areas of consideration include the impacts of changes in operating philosophy (e.g. cost focus vs. performance optimization) and the influence on the final decision of compromises made throughout the process.  The third scenario in our example posits a cost focus; COST is given twice the weight of each of the other criteria.  In this situation, the preference rankings are reversed relative to the original analysis.  However, the degree of preference is very small compared to the original decision.
     The fourth scenario considered maintains the productivity focus of the original evaluations, but reverses the weights of the remaining two criteria.  No change in preference rankings occurs due to the relative “power” of the PRODUCTIVITY weight.  The final two scenarios are shown in Exhibit 25.
     Any number of scenarios can be explored in a sensitivity analysis.  The number of scenarios and the weighting combinations used will be directed by the decision environment.  The less homogeneity there is in group evaluations, for example, the more scenarios that may warrant review.  The ability to easily iterate scenarios, or conduct trial-and-error explorations for significant scenarios, is the primary advantage of using a spreadsheet; automated calculations drastically reduce the time required to perform AHP.
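     Automating the synthesis is what makes scenario iteration nearly effortless.  The sketch below re-runs the weighted sum for the four scenarios discussed above; the criteria weights follow the descriptions in the text, while the local priorities are the same illustrative placeholders used earlier.

```python
import numpy as np

local = np.array([[0.643, 0.07, 0.09],    # illustrative local priorities (Jones, Wiley's, Acme)
                  [0.283, 0.28, 0.24],
                  [0.074, 0.65, 0.67]])

scenarios = {
    "original analysis":      [0.064, 0.669, 0.267],
    "equal weights":          [1/3, 1/3, 1/3],
    "cost focus":             [0.50, 0.25, 0.25],    # COST weighted twice each other criterion
    "swap COST/SERVICE LIFE": [0.267, 0.669, 0.064],
}

for name, weights in scenarios.items():
    overall = local @ np.array(weights)
    ranking = np.argsort(-overall)        # alternative indices ordered best to worst
    print(name, overall.round(3), ranking)
```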
 
Considering All Things
     All tools come with advantages and disadvantages.  Only practical concerns will be mentioned here, leaving the more theoretical and abstract discussions to the academics (see references for more information).
     AHP is a useful tool for “every-day” decision-makers.  It is quite accessible in the non-matrix form presented here, though simplifying the mathematics relegates its status to approximation.  However, approximation with greatly reduced effort is something to be applauded, especially in the case of AHP.  It seems that only hardcore mathematicians find the slight reduction in accuracy disturbing; the average practitioner need not be concerned with it.
     A characteristic of AHP that may be of concern to practitioners is the potential for rank reversal.  Adding an alternative to the analysis may cause the preference rankings of two other alternatives to be reversed in the expanded analysis.  If the reversed preferences are of sufficiently low rank, or otherwise do not affect the final decision, it is probably safe to ignore the reversal.  Sensitivity analysis can be used to gain further insight and make this determination.
     If the reversal has a significant impact on the final decision, and sensitivity analysis does not provide sufficient insight to address the situation, AHP should be supplemented with – or replaced by – other decision-making techniques.  If it can be resolved with sound judgment and reasoning, do so.  After all, AHP is only an aid; judgment and reasoning should always be applied, no matter the output of the model.
     If more drastic measures are needed to finalize a decision, there are many other decision-making tools and models to consider.  Future installments of “Making Decisions” will present some of them.
 
     If you have questions or feedback, feel free to post in the comments section below or contact JayWink directly.
 
     For a directory of “Making Decisions” volumes on “The Third Degree,” see “Vol. I:  Introduction and Terminology.”
 
References
[Link] “Multi-criteria analysis:  a manual.”  Department for Communities and Local Government, London, January 2009.
[Link] “Guidelines for applying multi-criteria analysis to the assessment of criteria and indicators.”
[Link] “Multicriteria Decision Methods:  An Attempt to Evaluate and Unify.”  Keun Tae Cho; Mathematical and Computer Modelling, May 2003.
[Link] Multi-criteria Analysis in the Renewable Energy Industry.  J.R. San Cristobal Mateo, 2012.
[Link] “A Straightforward Explanation of the Mathematical Foundation of the Analytic Hierarchy Process (AHP).”  Decision Lens.
[Link] “Analytic Hierarchy Process (AHP) Tutorial.”  Kardi Teknomo, 2006.
[Link] “Application of the AHP in project management.”  Kaml M. Al-Subhi Al-Harbi, International Journal of Project Management, 2001.
[Link] “Decision making with the analytic hierarchy process.”  Thomas L. Saaty, International Journal of Services Sciences, 2008.
[Link] “How to make a decision:  The Analytic Hierarchy Process.”  Thomas L. Saaty, European Journal of Operational Research, 1990.
[Link] “The Analytic Hierarchy Process – What It Is and How It Is Used.”  R.W. Saaty, Mathematical Modelling, December 1987.
[Link] Practical Decision Making using Super Decisions v3.  E. Mu and M. Pereyra-Rojas, 2017.
 

Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Making Decisions – Vol. II:  The Rational Model]]>Wed, 22 Apr 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/making-decisions-vol-ii-the-rational-model     The rational model of decision-making feels familiar, intuitive, even obvious to most of us.  This is true despite the fact that few of us follow a well-defined process consistently.  Inconsistency in the process is reflected in poor decision quality, failure to achieve objectives, or undesired or unexpected outcomes.
     Versions of the rational model are available from various sources, though many do not identify the process by this name.  Ranging from four to eight steps, with the description of each step varying significantly, these sources offer a wide variety of perspectives on the classic sequential decision-making process.  Fundamentally, however, each is simply an interpretation of the rational model of decision-making.
     In an attempt to consolidate and clarify, a seven-step process will be defined here.  The aim is to provide a more serviceable explanation than is typically available, resulting in a clearer understanding of the process that encourages practical application and facilitates consistency.  The JayWink Solutions version of the rational model of decision-making is as follows:

Step 1:  Clearly define the situation.  Describe the problem or opportunity with which you are faced as succinctly as possible without omitting critical information.  Goals and objectives to be achieved and constraints on implementation, including outcomes to be avoided, should be defined.  Consider using “is/is not” statements to add clarity and prevent distraction by peripheral issues.

Step 2:  Define decision criteria and relative influence of each.  List all characteristics of potential solutions that will influence the decision.  For example, when choosing a component or raw-material supplier, cost, distance, delivery performance, and reject rate are all important factors; others could also be considered.
     When all relevant characteristics have been identified, rank the importance of each relative to the others.  The final result will be an ordinal ranking, from most-important to least-important, of the decision criteria.  It is important to establish criteria rankings early in the process to minimize the effects of bias in alternative development and selection.

Step 3:  Develop alternatives.  Potential solutions can be developed by refining previous strategies, modifying existing systems, brainstorming new approaches, or a combination of techniques.  Strive to strike a balance between gathering as many perspectives and alternatives as possible and expending an appropriate amount of resources.

Step 4:  Evaluate alternatives.  Rate each alternative developed in step 3 on each criterion identified in step 2.  Estimates, forecasts, or other forms of prediction may be necessary to assign values for comparison.  Qualitative or pseudo-quantitative comparison may also be needed, such as creating a Likert-type scale on which each alternative’s expected performance is rated.
     When concrete values are not available, substitutes must be generated as objectively as possible.  Allowing bias to influence intermediate steps will lead the process to a foregone conclusion – that which would have been reached without the pretense of a defined process.  The source of this bias is often a person in a position of authority seeking a predetermined outcome s/he views favorably.  Excessive bias at any stage can invalidate the process, causing practitioners to question its value and abandon it.
     A lack of concrete values also increases the risk of unexpected outcomes or failure to meet objectives.  While a full treatment of risk is beyond the scope of this post, decision-makers should always be cognizant of the level of risk they are assuming.

Step 5:  Select alternative.  Compare alternatives according to the criteria rankings established in step 2.  If one alternative is found to be at least as good as the others on every relevant characteristic, and better on at least one, it is a “clear winner,” or the dominant option.  If this is not the case, some judgment may be needed to finalize the selection.
     Consider the following scenario:  of four criteria, alternative A is rated best on the most important, while alternative B rates better on the remaining three.  Does alternative B’s higher rating on three criteria outweigh alternative A’s higher rating on the most important criterion?
     This determination cannot be made in the abstract.  Familiarity with the situation, the magnitude of the ranking preferences (cardinal ranking), the magnitude of each alternative’s “superiority” in each criterion, and other information is necessary to make an appropriate decision.  The example is provided to reinforce understanding that this process will not always “make the decision for you” by making the best choice obvious.  It will, however, structure the relevant information in such a way that enables a logical and defensible – that is, rational – decision to be made efficiently.
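     The notion of a dominant alternative, or “clear winner,” can be checked mechanically once ratings have been assigned.  The sketch below uses the supplier criteria mentioned in step 2 with hypothetical ratings on a higher-is-better scale; it flags dominance only when one alternative rates at least as well on every criterion and strictly better on at least one.

```python
def dominates(a, b):
    """True if alternative `a` rates at least as well as `b` on every criterion
    and strictly better on at least one.  Ratings are higher-is-better scores."""
    at_least_as_good = all(a[c] >= b[c] for c in a)
    strictly_better = any(a[c] > b[c] for c in a)
    return at_least_as_good and strictly_better

# Hypothetical ratings (e.g., a 1-5 Likert-type scale) for two candidate suppliers
alt_A = {"cost": 4, "distance": 3, "delivery performance": 5, "reject rate": 4}
alt_B = {"cost": 3, "distance": 3, "delivery performance": 4, "reject rate": 2}

print(dominates(alt_A, alt_B))   # True: A is a "clear winner" over B
print(dominates(alt_B, alt_A))   # False: judgment would be needed if neither dominates
```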

Step 6:  Implement the decision.  Some decisions may be implemented by completing a single action.  Others may require extensive planning and expenditure of resources, justifying the assignment of a project manager and use of the detailed procedures and record-keeping that a trained PM imposes on projects.  The application of judgment will guide implementation at a level of complexity appropriate to the situation and solution chosen.

Step 7:  Evaluate results.  Conduct a “lessons learned” exercise to improve future decision-making performance.  Consider all aspects of the process, including the individuals that were and were not involved, the amount of research and time dedicated to it, the effectiveness of brainstorming sessions, the pervasiveness of bias, and myriad other factors that influence the decision-making process.
     Also, review the outcomes generated by the decision, including the achievement of objectives, occurrence of unexpected or undesired outcomes, organizational support of the decision, and any other impacts related to the decision under review.  Correlate each outcome with the characteristics of the process that were most influential.  Identifying the connection between the process and the outcome will encourage repetition of activities that generated positive outcomes and facilitate improvement of less-successful activities and the process as a whole.
     Neglecting this final step is a “deadly sin” that permits team members to abandon structured processes and return to “shooting from the hip.”  The review helps practitioners recognize the causes of suboptimal performance and opportunities for improvement.  The ability to visualize – and effectuate – improvements in both process execution and outcomes is the driving force behind consistent application of the model.
 
Return to “Why”
     The seven steps described above explain how to use the rational model, but only begin to explain why it is advantageous to do so.  To understand why one would choose to use the model, we will consider the consequences of an undefined decision-making process, including:
  • Relevant information may be overlooked or undefined, leading to an uninformed (“random”) decision.
  • Available information may not be thoroughly analyzed for accuracy, objectivity, and relevance.
  • Extraneous information may not be filtered out, leading to an irrational decision.
  • An individual decision-maker, or influential member of a group, may “anchor” on a predetermined solution, rejecting valid, but unquantified, arguments for competing alternatives.
  • Unclear or undefined responsibilities may cause progress to stall as confusion about the status of the decision permeates the group.
  • Instead of moving forward as a coordinated group, several people may each make an independent decision about the same situation, using different information and methods of analysis.
  • Consensus may be impossible due to differing perceptions of the situation, objectives, constraints, or feasibility of proposed alternatives in the absence of shared definitions.
  • No logical progression will be traceable to defend a decision or to learn from it.
 
In Times of Crisis
     It is a common refrain that standard procedures go “out the window” in times of crisis.  This does not have to be the case and it should not be.  In fact, it is in times of crisis that structure is needed most; emotions run high and decision-makers are more prone to irrationality.  It is true, however, that this process will be compressed in a number of ways.  Fewer people will be involved, less data will be analyzed, and fewer alternatives will be developed, each to a lesser degree of rigor.  The decision criteria and ranking may exist exclusively in the mind of a single decision-maker who rapidly processes the available information to select an alternative.  Implementation may also be subject to less-rigorous planning and documentation requirements.
     The first six steps of the decision-making process may be abbreviated, but the seventh should not.  After the crisis, the “lessons learned” activity should take place as it does for other decisions.  Decision-making in time-critical and crisis situations can be improved by the same method used to review and improve routine decisions.
 
     The rational model is just one of many approaches to decision-making.  Future installments of “Making Decisions” will present other models, tools, and techniques to enhance your decision-making skillset.  Among others, accounting for the strength of preferences and addressing risk and uncertainty are anticipated topics of discussion.
 
     Send specific questions to JayWink via the Contact page or in the comments section below.  We look forward to assisting you and your organization.
 
     For a directory of “Making Decisions” volumes on “The Third Degree,” see “Vol. I:  Introduction and Terminology.”
 
References
[Link] “8 Steps to Making Better Business Decisions.”  Richard Lannon, BA Times, July 31, 2012.
[Link] “An Overview of Decision-Making Models.”  Hanh Vu, ToughNickel, February 23, 2019.
[Link] “The different decision-making models you need to know — and their pros and cons.”  Gazprom Marketing & Trading.
[Link] “7 Steps to Effective Decision Making.”  UMass Dartmouth.
 

Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Making Decisions – Vol. I:  Introduction and Terminology]]>Wed, 08 Apr 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/making-decisions-vol-i-introduction-and-terminology     Given the importance of decision-making in our personal and professional lives, the topic receives shockingly little attention.  The potential consequences of low-quality decisions warrant extensive courses to build critical skills, yet few of us ever receive significant instruction in decision-making during formal education, as part of on-the-job training, or from mentors.  It is even under the radar of many conscientious autodidacts.  The “Making Decisions” series of “The Third Degree” aims to raise the profile of this critical skillset and provide sufficient information to improve readers’ decision-making prowess.
     It is helpful, when beginning to study a new topic, to familiarize oneself with some of the unique terminology that will be encountered.  This installment of “Making Decisions” will serve as a glossary for reference throughout the series.  It also provides a preview of the series content and a directory of published volumes.
Glossary
active postponement:  intentional delay of a decision or action in order to gather additional information, accommodate changing circumstances, etc.; delay defined in time, scope, and purpose.
analysis paralysis:  a condition in which an individual or group feels overwhelmed by the information presented, is incapable of processing it fully or filtering the irrelevant, resulting in the inability to reach a conclusion or make a decision.  Also paralysis by analysis.
Analytic Hierarchy Process (AHP):  a method of multi-criteria analysis that uses pairwise comparisons and weighting of criteria to quantify alternative preferences.
Arrow’s Impossibility Theorem:  economist Kenneth Arrow proved that, when voters rank three or more alternatives, no voting system can satisfy all fairness criteria simultaneously.
Bain’s RAPID Framework:  a tool developed by the Bain & Company consulting firm to assist decision-making groups in assigning roles to its members.  RAPID is an acronym for these roles:  Recommend, Agree, Perform, Input, and Decide.
bounded rationality:  limits on the ability to be rational, as defined by a decision-maker’s cognitive ability and applicable experience, available information, and time constraints.
brainstorm:  conduct an idea-generation exercise within a group that prohibits criticism and encourages creativity.
cardinal ranking:  ordering items – alternatives, criteria, etc. – by preference or importance, identifying the magnitude, or degree, of preferences.  Item A is 1.5x better than item B or item C is much more important than item D, for example.
compensatory assessment:  a technique used to assess alternatives in which lower performance on one criterion can be offset by higher performance on one or more other criteria.
Condorcet winner:  a candidate that wins pairwise elections versus all other candidates.
consensus:  the state reached by a group in which all members have committed to implement a decision, though disagreement about some aspects may remain; the goal of a compromise.
consequence table:  See performance matrix.
cost/benefit analysis:  a method of assessing a course of action based on the ratio of the cost to implement to the benefits expected to result from implementation.
cost-effectiveness analysis:  a method of comparing alternative courses of action, each achieving the same objective, based on the relative costs of implementation of each.
decision:  a clearly pronounced choice of action or behavior.
decision environment:  the circumstances under which a decision is to be made; the aggregate of objectives, preferences, alternatives, and information available to the decision-maker.
decision-making:  the act or process of choosing among alternatives, whether individually or in a group.
decision quality:  the extent to which relevant information and alternatives have been considered and the influence of such on the decision rendered; the extent to which a decision achieves expected outcomes.
decision rights:  authority to make a decision on a specific matter.
decision rule:  definition of how a final decision will be reached, including voting or decision rights of group members and number or percentage of voting members that must agree to implement a decision.  Examples include unilateral (leader decides), plurality (highest number of votes), simple majority (more than half of votes), and unanimity (all in agreement).
decision theory:  prediction of behavior that will result in best outcome, often by assigning probabilities and numerical values to possible outcomes.
decision tree:  a graphical representation of choices (decisions), chance (probabilities), and expected payoffs for a series of events.
devil’s advocate:  an individual tasked with raising questions about a proposed solution to ensure that it is fully vetted before selection and implementation.
dispersion of responsibility/accountability:  a characteristic of group decision-making that requires members to share responsibility for a decision and accountability for consequences of that decision, theoretically reducing the individual burden of each member.
dominant (alternative):  an alternative that is rated higher than (is preferred to) another on at least one criterion and at least as high (indifferent or preferred) on all other criteria.
expected utility:  the product of an outcome’s utility and its probability of occurrence; the sum of the expected utilities of all possible outcomes defined for a decision.
fairness criteria:  standards against which voting systems are evaluated.
financial analysis:  assessment of revenues and costs resulting, or expected to result, from the implementation of a decision.
group decision-making:  any decision-making process for which two or more people share responsibility for collecting and analyzing information, developing alternatives, selecting and implementing a solution.
groupthink:  a condition in which group members stop critically analyzing alternatives or information provided to the group for purposes of decision-making; becoming highly agreeable without rational justification.
Hoy-Tarter Model:  a tool used to aid selection of members to a decision-making group.  Each candidate is categorized in a 2 x 2 matrix by assessing his/her expertise and the likely impact of the decision to be made on his/her area of responsibility.
illusion of agreement:  the erroneous perception of group members, or its leader, that there is concurrence among members, usually occurring when members stop offering countervailing arguments because they have deemed it futile or professionally perilous to do so.
irrational decision:  a decision made using extraneous information, such as personal, political, or social considerations.
manipulation:  a practice of strategic voting in which voters rank candidates differently than their true preferences in an attempt to bias the election toward their first-choice candidate.
multi-criteria analysis (MCA):  a set of decision-making tools used when a decision involves multiple, often competing or contradictory, objectives.
mutual independence:  a condition in which all criteria preferences can be rated without knowledge of any other criteria preference ratings; i.e. the rating of one criterion does not depend upon or influence the rating of any other.
non-compensatory assessment:  a technique used to assess alternatives in which tradeoffs in performance among criteria are not allowed; i.e. each alternative must exceed a minimum threshold level of performance to be considered.
ordinal ranking:  ordering items – alternatives, criteria, etc. – by preference or importance without reference to the magnitude, or degree, of preferences.  Items may be identified as first, second, third, and so on.
paralysis by analysis:  See analysis paralysis.
passive postponement:  undefined delay in decision or action caused by lack of focus or commitment to action.
performance matrix:  a table comparing, in absolute terms, the characteristics of alternatives; information presented may include quantitative measurements, qualitative assessments, and binary conditions (e.g. existence of a specific feature).  Also, consequence table.
preference ballot:  record of a voter’s ranked preferences of alternatives; may indicate ordinal or cardinal rankings.
preference profile:  voters’ ranked preferences, ordinal or cardinal.
probability:  the likelihood that an event or outcome will occur, often expressed in percentage terms, from 0% (impossible) to 100% (certain).
rank reversal:  a condition that arises, particularly in AHP, when adding an alternative to the analysis causes the order of preference of two other alternatives to reverse, or flip.
rational:  based on relevant information and objective analysis.
satisfice:  choose the first acceptable option encountered.
shadow decision:  a decision made, from an observer’s perspective, with little or no relevant information available or considered.
spoiler:  a candidate with little chance to win that performs well enough to diminish another’s chance to win; a candidate that splits the vote, causing an otherwise likely winner to lose.
utility:  a numerical representation of the desirability of an outcome.
voting method:  See voting system.
voting system:  set of rules that defines how votes will be cast, counted, transferred, etc.  Also, voting method.
voting technology:  aids to collecting and counting ballots and applying rules of the voting system.

Series Preview
     There are several topics under the umbrella of decision-making to be discussed.  A straightforward, sequential process will be described to provide a foundation of logic and consistency.  Various models of decision-making will be presented to reflect the differences in thought processes and circumstances for which they may be best suited.  Specific tools and techniques will be explained, allowing practitioners to quickly apply them to important decisions.  Techniques used to facilitate decision-making by groups, as well as individuals, will be explored.
     The range of topics related to effective decision-making is extensive; the broad categories mentioned above provide only a hint.  Thus, a variety of topics deemed relevant and useful will be covered, including decision-making techniques for both generic and specific, narrow applications.
     One option that will often not be expressed explicitly, but must always be considered, is to do nothing.  It is possible, in some circumstances, for all alternatives under consideration to be less desirable than the status quo.  Decision-makers must resist calls to “do something” until this option has been evaluated, no matter what method or tools are used to aid the decision.  The best action may, in fact, be inaction.
 
     Readers can submit questions about decision-making, suggest topics of particular interest for future installments, or provide other feedback in the comments section or on the Contact page.  JayWink is ready to help you and your organization perform by providing training, materials, or other guidance.  Don’t hesitate to reach out if we can be of assistance.
 
References
[Link] “Guidelines for applying multi-criteria analysis to the assessment of criteria and indicators.”
[Link] “Multi-criteria analysis:  a manual.”  Department for Communities and Local Government, London, January 2009.
[Link] Decision-Making:  Creativity, Judgment, and Systems.  Henry S. Brinkers, Ed.  Ohio State University Press, 1972.
 

Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
 
Directory of “Making Decisions” entries on “The Third Degree.”
Vol. I:  Introduction and Terminology (8Apr2020)
Vol. II:  The Rational Model (22Apr2020)
Vol. III:  Analytic Hierarchy Process (6May2020)
Vol. IV:  Fundamentals of Group Decision-Making (20May2020)

]]>
<![CDATA[Augmented Reality – Part 3:  Applications in the Service Sector]]>Wed, 25 Mar 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/augmented-reality-part-3-applications-in-the-service-sector     Uses of augmented reality (AR) in various industries have been described in previous installments of “Augmented Reality” (Part 1, Part 2).  In this installment, we will explore AR applications aimed at improving customer experiences in service operations.  Whether creating new service options or improving delivery of existing services, AR has the potential to transform our interactions with service providers.
     Front-office operations are mostly transparent due to customer participation.  Customer presence is a key characteristic that differentiates services from the production of goods.  Thus, technologies employed in service industries are often highly visible.  This can be a blessing or a curse.
     Though some AR enhancements for end users are in development (see Part 2), customers who purchase manufactured goods typically see only the final product.  There is significant risk inherent in the use of augmented reality by service providers.  If the AR system is difficult to use or unflattering in any way, the customer relationship may be damaged.  If AR has supplanted person-to-person assistance, customers who do not favor the AR may feel especially alienated.
     Consider a similar, related situation:  automated telephone systems used in most customer service call centers often make it difficult, if not impossible, to speak to a person, even when none of the automated selections suit the customer’s needs.  Most people find this incredibly frustrating, even offensive, and seek other alternatives.  Technology that underperforms puts customer satisfaction at risk.  This should not deter the pursuit of technology, however.  Awareness of the criticality of thorough testing and open feedback loops is key.
 
Augmented Service
     The explosion of online shopping has threatened the relationships many retailers have worked to develop with customers.  Without face-to-face interactions, these relationships are more difficult to establish and maintain.  AR helps prevent customers from straying, allowing them to be “present” without visiting a physical location, if one even exists.  For example, augmented reality has given rise to the “virtual makeover.”
     A virtual makeover is achieved by uploading a photo of a customer to a retailer’s website or mobile app and choosing products to “try on.”  Instead of seeing photos of the latest fashions worn by models – or mannequins – customers can see themselves in the newest styles.  Likewise, eyeglasses, jewelry, or other accessories can be “tried on.”  A new hairstyle or shade of makeup can also be “tested” by augmenting a customer’s face with the chosen details.  A simulated image can then be provided to a stylist to reproduce the desired look.  The use of AR prevents customers from feeling rushed or self-conscious, as they might in a salon or clothing store, if they find it difficult to decide on a new look.  What begins as an attempt to sustain a weakening customer connection may actually strengthen it.
 
     Customers can also subject their homes to a similar experience with a “virtual remodel.”  Color and fabric samples or in-store displays will never provide the whole story.  Now, paint, carpet, and furniture – even an entire kitchen – can be projected into the actual living space.  Caveat emptor:  AR can show you how a new couch will look in your living room, but it cannot tell you if it is comfortable or well-made; it may be wise to plan a trip to a showroom before making your final decision to purchase.
 
     Several other applications of AR have been touched on in the first two parts of “Augmented Reality” where the lines between the service sector and other industries are not well-defined.  For example, AR-generated product descriptions and operating or installation instructions blur the lines between manufacturers, retailers, and installers.       
     Tours of historical sites or unfamiliar cities have been discussed, directly and indirectly, throughout the series of posts on digital tools.  The final enhancement to tours that we will discuss is self-guiding capability.  Traditional tour groups are treated like herds, always kept in a tight group, so no one gets lost; there is little opportunity to indulge personal preferences.  Adding directions along the tour path to the visual field allows visitors to pace their own tour, dependent only on their own interests.  If a visitor is separated from the herd, the AR tour guide will display the path to follow, ensuring that all points of interest are visited without getting lost.
 
     As mentioned in Part 2, equipment maintenance can be considered a service performed within manufacturing operations.  In other contexts, this is usually called “field service;” hence its inclusion here.  The uses of AR in field service are numerous; they depend on the specific equipment and types of work to be performed.  Simple examples include presenting the locations of all fasteners to ensure secure installation of a component, identifying all points of adjustment available to optimize equipment performance, and identifying common failure points, such as leaky connections, to accelerate basic troubleshooting.
     A particularly powerful application of AR in field service is the ability to “port in” expertise as needed.  If a technician identifies a problem that is beyond his/her expertise, the AR image can be displayed on a remote monitor.  Combined with voice communication, a remote “expert” can guide the on-site technician through additional troubleshooting or repairs.  Providing remote assistance minimizes travel requirements, making these “experts” more widely available to support multiple projects simultaneously.
 
     The use of digital tools in training has become a recurring theme in this series of posts.  Training with AR is somewhat unique in the service sector, however, as both customers and providers can benefit from it.  Service providers use AR-assisted training, in similar fashion to manufacturers, to teach operators to properly perform a task or process.  Service providers can also use AR to guide customers through a service experience or enable future self-service.  If customers engage the system infrequently or it is easy to make mistakes, they may use the AR each and every time.  Either way, it lightens the load on front-office personnel, allowing them to focus on value-added activities.
 
Advantages of AR in Service Operations
     We will, again, summarize the preceding discussion by focusing on the benefits service providers can gain by employing augmented reality.  The advantages are as diverse as the services that may be augmented; they include:
  • Increased customer engagement with online retailers.
  • Customer self-sufficiency; fewer attendants are required to assist customers.
  • Reduced return rates for retailers, as customers are more confident in their purchasing decisions and, therefore, more committed to them.
  • Remote support reduces travel for field-service technicians and downtime for equipment operators.
  • Remote support capability expands the ability of customer service representatives to assist buyers of consumer goods.
  • Minimized need for retailers to maintain physical (“brick-and-mortar”) stores.
  • Increased customer satisfaction resulting from greater accessibility of information.
  • Service experiences are more interactive and more customized; customers feel less like something is being done to them and more like something is being done for them.
 
     The advantages outlined above are, necessarily, somewhat generic.  The unique and personal nature of services creates incredible potential for both service providers and customers to benefit from AR or other enhancements.  The specific benefits, however, are difficult to project from a few examples to vastly different services.  Defining and creating these benefits depends on creativity and insight into the service process and customer base.
 
     As you look around your service business in search of improvement opportunities, feel free to contact JayWink Solutions for assistance or guidance.  We can discuss various methods of augmenting your organization’s performance.
 
References
[Link] “Augmented Reality in Healthcare.”  Jasmine Sanchez, Plug and Play.
[Link] “Augmented Reality In Healthcare Will Be Revolutionary: 9 Examples.”  The Medical Futurist, November 14, 2019.
[Link] “Can Augmented Reality Improve Manufacturing?”  American Machinist, November 21, 2019.
[Link] “Simulating Reality to Fix Mistakes, Improve Production.”  American Machinist, November 21, 2019.
[Link] “How Augmented Reality Will Change Manufacturing.”  Tower Fasteners.
[Link] “Augmented Reality and the Smart Factory.”  Manufacturing.net, April 12, 2019.
[Link] “How Augmented Reality Will Disrupt The Manufacturing Industry.”  ThomasNet, January 9, 2019.
[Link] “Augmented Reality and Manufacturing.”  Machine Design, September 23, 2019.
[Link] “7 Ways Augmented Reality in Manufacturing Will Revolutionize The Industry.”  Cerasis.
[Link] “What Can Augmented Reality Do for Manufacturing?”  Engineering.com, May 11, 2017.
[Link] “Real world applications of Augmented Reality (AR) in manufacturing.”  Manufacturing Lounge.
[Link] “Why augmented reality could be a dream solution for manufacturers.”  Essentra Components, August 28, 2018.
[Link] “5 Ways AR Will Change the Reality of Manufacturing.”  Design News, September 11, 2018.
[Link] “9 Ways Augmented Reality Customer Experience Boosts Sales and Satisfaction.”  TechSee, January 18, 2019.
[Link] “3 Ways Augmented Reality is Taking Customer Experience to the Next Level.”  Entrepreneur, July 11, 2019.
[Link] “Augmented Reality Study Shows Big Business Impact in Customer Experience.”  SmarterCX, August 20, 2019.
[Link] “How to Use AR (Augmented Reality) to Improve the Customer Experience.”  HubSpot.
[Link] “Augmented Reality for Manufacturing:  Bringing Digital Transformation to Skilled Workers.”  ARC Advisory Group, December 2018.
[Link] “Manufacturing Lessons from Space.” Quality Digest, June 19, 2017.
[Link] “The Total Economic Impact of PTC Vuforia.”  Forrester, July 2019.
[Link] Caudell, Thomas & Mizell, David. (1992). Augmented reality: An application of heads-up display technology to manual manufacturing processes. Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences. 2. 659 - 669 vol.2.
[Link] “AR/VR Reinvent Workforce Training.”  Plant Services, September 2019.
[Link] “How Augmented Reality Can Modernize Aerospace And Defense Manufacturing.”  Aerospace and Defense Technology, December 2019.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Augmented Reality – Part 2:  Manufacturing Industry Applications]]>Wed, 11 Mar 2020 14:30:00 GMThttp://jaywinksolutions.com/blog/augmented-reality-part-2-manufacturing-industry-applications     Some of the augmented reality (AR) applications most likely to attract popular attention were presented in “Part 1:  An Introduction to the Technology.”  When employed by manufacturing companies, AR is less likely to be experienced directly by the masses, but may have a greater impact on their lives.  There may be a shift, however, as AR applications pervade product development and end-user activities.
     In this installment, we look at AR applications in manufacturing industries that improve operations, including product development, quality control, and maintenance.  Some are involved directly in the transformation of materials to end products, while others fill supporting roles.  The potential impact of AR use on customer satisfaction will also be explored.
Manufacturing with Augmented Reality
     Manufacturing a product requires a series of activities.  Before physical processing can begin, the object to be produced must be fully defined.  The use of a digital twin – a CAD model – for this purpose was discussed in “Double Vision.”  Augmented reality goes a step further by creating a coincident image of digital and physical twins to evaluate revisions.  For example, a vehicle’s aesthetic redesign (“facelift”) data can be projected onto an existing vehicle to conduct a realistic visual analysis.
     Other products can also be placed in their end-use environment by use of AR.  Evaluations of aesthetics, scale, accessibility, or other characteristics can be performed to ensure that design requirements are met.
     The finalized product design is released to production, where AR can be employed in various process-monitoring tasks.  Assembly operations are particularly amenable to augmented reality.  The point of insertion and orientation of each component to be assembled can be presented in succession, reinforcing the required assembly sequence.  Additional data, such as the required torque of threaded fasteners, can be added to the display at the moment it is needed.
     Small-scale assembly operations – those in which all activities take place in a single location – can be easily mapped; all items in the visual field will be recognized by the AR system.  Large-scale operations, such as aircraft assembly, require assemblers to move significant distances to complete tasks.  This type of operation requires additional technology for the AR application to be fully functional.  A positioning system must also be employed to ensure that the operator is performing the defined set of tasks on the correct components.  To understand how this works, think of GPS scaled down to operate inside a single, though expansive, facility.  The expanded dimensions of these operations heighten the risk of error.  Implementing a position-enabled AR application reduces the risk to a level comparable to similarly-equipped small-scale operations.
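     To make this concrete, consider the rough sketch below.  It is only an illustration (the names, coordinates, and tolerances are hypothetical, not any particular AR platform’s interface) of how each work-instruction step can carry its own overlay content and be gated on the operator’s tracked position before it is displayed.

import time
from dataclasses import dataclass
from math import dist
from typing import Optional

@dataclass
class AssemblyStep:
    component: str
    station_xyz: tuple          # indoor-positioning coordinates of the work location
    insertion_point: str        # overlay text describing where the component is installed
    orientation_deg: float
    torque_nm: Optional[float] = None   # displayed only at the moment it is needed

def guide_assembly(steps, get_operator_position, display, tolerance_m=0.5):
    """Present each step only when the operator is at the correct station."""
    for step in steps:
        # Gate the overlay on the position reported by the facility's positioning system.
        while dist(get_operator_position(), step.station_xyz) > tolerance_m:
            display(f"Move to the station for {step.component}")
            time.sleep(0.5)     # a real system would use the headset's tracking update loop
        overlay = (f"Install {step.component} at {step.insertion_point}, "
                   f"orientation {step.orientation_deg} deg")
        if step.torque_nm is not None:
            overlay += f"; torque to {step.torque_nm} N-m"
        display(overlay)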
 
     An alternate form of augmented reality is also used for specialized applications.  Typical applications employ a headset, smartphone, or other display device to overlay digital information on physical objects in the operator’s field of view.  This alternate form uses a light source, usually a laser, to project information directly onto physical objects.  Carbon fiber layups and wiring harness fabrication are representative examples.
     Maintaining consistent quality can also be facilitated by AR.  Supplier performance can be verified with augmented incoming inspection.  The “no fault forward” philosophy is supported by in-process verifications; AR can be used to ensure that components are present in the correct quantity, location, and orientation before allowing an assembly to be passed to the next operation.  Final inspection with AR allows an inspector to confirm all prior inspections have been completed and logged while checking for proper labelling, packaging, or other details required to ship the product.
     The value of AR to quality assurance can be expressed with Juran’s popular dictum “100% inspection is only 80% effective.”  The consistency of an AR application prevents boredom, complacency, or distractions from causing a quality “spill.”
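     As a rough illustration of the “no fault forward” gate (the data structures below are assumed for the example; a real system would draw its detections from the AR vision pipeline), the check reduces to comparing what is seen in the field of view against the requirements for the current operation and releasing the assembly only when no discrepancies remain.

from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    component: str
    quantity: int
    location: str               # named feature or zone on the assembly
    orientation_deg: float
    tolerance_deg: float = 5.0

def find_faults(requirements, detections):
    """detections: list of dicts reported by the vision system; returns discrepancies."""
    issues = []
    for req in requirements:
        found = [d for d in detections
                 if d["component"] == req.component and d["location"] == req.location]
        if len(found) != req.quantity:
            issues.append(f"{req.component}: expected {req.quantity} at {req.location}, "
                          f"found {len(found)}")
        issues += [f"{req.component} at {req.location}: orientation out of tolerance"
                   for d in found
                   if abs(d["orientation_deg"] - req.orientation_deg) > req.tolerance_deg]
    return issues

def release_to_next_operation(requirements, detections):
    # "No fault forward": pass the assembly on only if the fault list is empty.
    return not find_faults(requirements, detections)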
 
     Manufacturers rely on a variety of equipment to remain productive.  Therefore, proper maintenance and efficient repairs are critical to their success.  Commonly regarded as a service function within manufacturing operations, maintenance and repair will be discussed further in Part 3 of “Augmented Reality.”
 
     At the end of the chain of events sits the product user.  Feature descriptions, operating instructions, and basic troubleshooting information can be provided in an AR environment.  AR applications for end users can be used to target prospective buyers with marketing messages, coordinate purchase and delivery, enhance the ownership experience, and create a feedback loop to inform designers of desired product improvements.
     As mentioned in Part 1, training with AR spans all contexts discussed.  In fact, the manufacturing applications mentioned thus far could be used for training in addition to performing the actual task for which they are designed.  Practicing unfamiliar activities with AR allows a person to quickly gain familiarity with the physical objects to be manipulated, the digital information to be presented, and the unique environment that the combination creates.  Doing so offline allows this to happen without interfering with production; productivity and quality are, therefore, unaffected.
     New employee onboarding can be improved with AR applications that provide critical information in a format that is more easily accessible and interactive than the typical handbook.  Great value can be derived from an AR application that helps a new employee locate safety equipment, such as first-aid and eye-wash stations, and learn safety procedures such as emergency evacuation.  Similarly, AR can be used to show an employee where to retrieve supplies necessary for their job.  AR applications used for these purposes and others are likely to be far more memorable, easier to access for refreshers, and more interesting than traditional materials.  These favorable characteristics engage employees more fully, improving learning curves, information retention, and job satisfaction.
 
Launching AR in Manufacturing
     While AR is broadly applicable to manufacturing operations of all types, organizations should begin by looking for tasks with characteristics that make them especially attractive for AR implementation.  This section describes the types of operations most likely to yield a quick payback and generate enthusiasm for AR within a manufacturing company.
     Dull, repetitive tasks are often subject to higher error rates than would be expected based on their level of complexity alone.  As boredom sets in, the resulting inattention leads to errors or omissions.  AR could prevent such mistakes by guiding operators through each cycle, verifying correct completion for every part produced.
     Tasks performed infrequently can be affected by operators’ “memory leaks” – information that fades from memory because of disuse.  In this situation, AR serves to remind operators of details that could be forgotten or overlooked.  The operator is already proficient in task performance; AR ensures quality through task accuracy and completeness.
     Long processes with many steps that may be difficult to memorize could benefit greatly from AR.  The traditional response to this scenario is to split the process into groups of tasks to be performed by multiple operators in succession.  Each operator then rotates through assignments performing each set of tasks.  However, not every process or operation is amenable to equitable division.  Effectively splitting the workload and efficiently completing a process with job rotations is, itself, a recurring process.  Changes to the product and process must be evaluated and work redistributed as necessary throughout the life of both product and process.
     Job rotations are also implemented to prevent repetitive stress disorders and mistakes due to inattention, discussed earlier.  Equipping operators with AR could eliminate the need for task-splitting and rotations.  Each operator performs a wider range of activities, reducing repetitive stress and boredom.  Advantages discussed above – higher engagement and cyclic reminders – also prevent memory leaks from negatively impacting productivity or quality.  Responsibility for a larger set of tasks increases operators’ ownership of results and improves job satisfaction.  It also becomes easier to monitor individual performance to identify candidates for special recognition, development opportunities, or those with advancement potential.
     High-risk operations are prime candidates for AR implementation.  If employee safety can be enhanced, it should be given serious consideration.  Other risks worth consideration include process failure, equipment damage, quality spill, environmental impact, and competitive threats.
     Focusing on operations with these characteristics will give your AR launch a greater chance of success and acceptance.  Deployment of AR technology can be expanded to less-obvious applications, applying experience gained from the initial installations, as support for the effort strengthens.
 
     Entertainment applications of AR, as mentioned in Part 1, often direct users’ focus to the augmentation, using the real world as merely a canvas or backdrop.  In contrast, commercial applications employ augmented reality to maintain focus on important elements of the user’s surroundings or the process in which the user is engaged.  In this case, the risks are highlighted to ensure that they are properly addressed.  This simply means that the intended purpose and environment must be kept front-of-mind throughout development to ensure that AR applications meet the objectives defined by the adopting organization.  Loss of focus on these objectives may lead to loss of support for an AR initiative, a setback that could damage improvement efforts for years.
     Service industries combine public exposure and process efficiency objectives, a unique environment that creates extensive potential for AR implementation.  In “Part 3:  Applications in the Service Sector”, we will explore how these special characteristics generate benefits from augmented reality.  Look for it soon on “The Third Degree.”
 
     If you would like assistance evaluating potential AR applications or implementing an appropriate solution in your manufacturing facility, contact JayWink Solutions to schedule an initial consultation.
 
References
[Link] “Augmented Reality in Healthcare.”  Jasmine Sanchez, Plug and Play.
[Link] “Augmented Reality In Healthcare Will Be Revolutionary: 9 Examples.”  The Medical Futurist, November 14, 2019.
[Link] “Can Augmented Reality Improve Manufacturing?”  American Machinist, November 21, 2019.
[Link] “Simulating Reality to Fix Mistakes, Improve Production.”  American Machinist, November 21, 2019.
[Link] “How Augmented Reality Will Change Manufacturing.”  Tower Fasteners.
[Link] “Augmented Reality and the Smart Factory.”  Manufacturing.net, April 12, 2019.
[Link] “How Augmented Reality Will Disrupt The Manufacturing Industry.”  ThomasNet, January 9, 2019.
[Link] “Augmented Reality and Manufacturing.”  Machine Design, September 23, 2019.
[Link] “7 Ways Augmented Reality in Manufacturing Will Revolutionize The Industry.”  Cerasis.
[Link] “What Can Augmented Reality Do for Manufacturing?”  Engineering.com, May 11, 2017.
[Link] “Real world applications of Augmented Reality (AR) in manufacturing.”  Manufacturing Lounge.
[Link] “Why augmented reality could be a dream solution for manufacturers.”  Essentra Components, August 28, 2018.
[Link] “5 Ways AR Will Change the Reality of Manufacturing.”  Design News, September 11, 2018.
[Link] “9 Ways Augmented Reality Customer Experience Boosts Sales and Satisfaction.”  TechSee, January 18, 2019.
[Link] “3 Ways Augmented Reality is Taking Customer Experience to the Next Level.”  Entrepreneur, July 11, 2019.
[Link] “Augmented Reality Study Shows Big Business Impact in Customer Experience.”  SmarterCX, August 20, 2019.
[Link] “How to Use AR (Augmented Reality) to Improve the Customer Experience.”  HubSpot.
[Link] “Augmented Reality for Manufacturing:  Bringing Digital Transformation to Skilled Workers.”  ARC Advisory Group, December 2018.
[Link] “Manufacturing Lessons from Space.” Quality Digest, June 19, 2017.
[Link] “The Total Economic Impact of PTC Vuforia.”  Forrester, July 2019.
[Link] “Augmented reality: An application of heads-up display technology to manual manufacturing processes.”  Thomas Caudell and David Mizell, Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, vol. 2, 1992, pp. 659-669.
[Link] “AR/VR Reinvent Workforce Training.”  Plant Services, September 2019.
[Link] “How Augmented Reality Can Modernize Aerospace And Defense Manufacturing.”  Aerospace and Defense Technology, December 2019.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Augmented Reality – Part 1:  An Introduction to the Technology]]>Wed, 26 Feb 2020 15:30:00 GMThttp://jaywinksolutions.com/blog/augmented-reality-part-1-an-introduction-to-the-technology     When we see or hear a reference to advanced technologies, many of us think of modern machinery used to perform physical processes, often without human intervention.  CNC machining centers, robotic work cells, automated logistics systems, drones, and autonomous vehicles often eclipse other technologies in our visions.  Digital tools are often overlooked simply because many of us find it difficult to visualize their use in the physical environments we regularly inhabit.
     There is an entire class of digital tools that is rising in prominence, yet currently receives little attention in mainstream discourse:  augmented reality (AR).  There are valid applications of AR in varied industries.  Increased awareness and understanding of these applications and the potential they possess for improving safety, quality, and productivity will help organizations identify opportunities to take the next step in digital transformation, building on predecessor technologies such as digital twins and virtual reality.
Popular Applications of AR
     While many of us remain unaware of the range of applications for augmented reality, most have encountered – even engaged with – AR.  When the entertainment industry embraces a technology, it often becomes highly visible; such is the case with AR.
     A recent, highly-publicized example is the Pokemon GO mobile game that became a national sensation in 2016.  It adds fictional characters to a player’s real-world surroundings to be sought out and “collected.”  Critics of the game bemoaned users’ focus on the augment – the Pokemon characters – to the exclusion of reality – the dangers of inattention, discourtesy toward others, disregard for trespassing restrictions, and so on.  Fans of the game simply played on without consideration of the background technology that made it possible.
     Another smartphone-centric entertainment application of AR is the augmented selfie.  Users can add a variety of digital enhancements to their “self-portraits,” including an animal’s ears, nose, and whiskers.  “To each his own.”
 
     An earlier example of AR in entertainment is Terminator Vision.  As The Terminator searches for future resistance leader John Connor, information about his surroundings appears directly in his visual field.  Although the AR is simulated, it is a prescient representation of the capabilities of AR technology.  Red tint not required.
     Adding an educational element to the entertainment applications, museums and historical sites can complement their tours with AR.  Viewing a sculpture, for example, could prompt display of the sculptor’s biography or suggested additional works for the visitor to view.  The site of ancient ruins could be viewed in its original form using AR, providing before (augmented) and after (unaltered) images of a scene.  A site destroyed in battle, for example, may be viewed, linking the before and after images with a recreation of critical events.
     Moving to a purely educational application, students can use AR to proceed through lessons at their own pace.  Images, text, and audio can be integrated in an educational program to cater to different learning styles.  The independent nature of study ensures that no student is rushed through material with which they struggle, nor bored and disengaged by lingering on material they have mastered.
     Teaching with AR is particularly useful in manufacturing and service industries.  Employee onboarding and upskilling can be done efficiently with the aid of AR applications.  Training with AR will be discussed further in future installments of “Augmented Reality” on “The Third Degree.”
 
     Applications of much greater consequence are also in development.  Military forces can use AR to aid navigation through dangerous environments, such as unfamiliar structures (buildings) where enemy combatants are believed to be, or previously mapped minefields.  Any time soldiers must operate with poor visibility – dark of night, sandstorm, fog, etc. – AR could provide information beyond what can be seen with night-vision technology.
     Medical care is also improving with the assistance of AR.  Presentation of electronic medical records in a doctor’s visual field provides immediate access without attention straying from the patient.  Blood vessel mapping identifies appropriate vessels for IV insertions and blood draws; accuracy is improved, resulting in less pain and discomfort.  MRI and CT scan overlays viewed during a procedure improve the accuracy and efficiency of surgeons.  Patients can also be educated about medical procedures or disease progression by projecting information onto a patient’s own body.  Such a gripping presentation – more concrete, less abstract than any other form – has the potential to improve patient decision quality and, therefore, healthcare outcomes.
 
Predecessor Technologies
     Augmented reality technology is built on a foundation of several digital tools.  Information processing and computation (computers) undergird and make possible development of all related tools.  Digital twins could not be created, and simulation could not be performed efficiently without computers.  These form the basis for virtual reality.  Vision systems have transitioned from analog to digital cameras, allowing visual information to be processed like any other data.  Augmented reality integrates capabilities from each of these technologies to create a composite “best-of-both-worlds” experience.  How these building blocks are used to construct an AR application is summarized in the diagram below.
     As awareness of augmented reality spreads, new applications will continue to appear.  I am confident that creative developers will find new applications, in new industries, with increasing capability, far into the future.  Corresponding rates of adoption and development are likely to accelerate, following the pattern of many previous technologies.  This technology is not just an interesting element of entertainment, but a highly valuable tool for many practical applications.
     Forthcoming installments of “Augmented Reality” will discuss practical applications in the contexts typically explored in this forum.  Be sure to come back to “The Third Degree” for Part 2:  Manufacturing Industry Applications and Part 3:  Applications in the Service Sector.
 
     For further discussion of advanced technologies and how they can be leveraged to improve operations, feel free to contact JayWink Solutions.  We would enjoy helping you augment the reality of your organization’s situation with improved financial and operational performance.
 
References
[Link] “Augmented Reality in Healthcare.”  Jasmine Sanchez, Plug and Play.
[Link] “Augmented Reality In Healthcare Will Be Revolutionary: 9 Examples.”  The Medical Futurist, November 14, 2019.
[Link] “Can Augmented Reality Improve Manufacturing?”  American Machinist, November 21, 2019.
[Link] “Simulating Reality to Fix Mistakes, Improve Production.”  American Machinist, November 21, 2019.
[Link] “How Augmented Reality Will Change Manufacturing.”  Tower Fasteners.
[Link] “Augmented Reality and the Smart Factory.”  Manufacturing.net, April 12, 2019.
[Link] “How Augmented Reality Will Disrupt The Manufacturing Industry.”  ThomasNet, January 9, 2019.
[Link] “Augmented Reality and Manufacturing.”  Machine Design, September 23, 2019.
[Link] “7 Ways Augmented Reality in Manufacturing Will Revolutionize The Industry.”  Cerasis.
[Link] “What Can Augmented Reality Do for Manufacturing?”  Engineering.com, May 11, 2017.
[Link] “Real world applications of Augmented Reality (AR) in manufacturing.”  Manufacturing Lounge.
[Link] “Why augmented reality could be a dream solution for manufacturers.”  Essentra Components, August 28, 2018.
[Link] “5 Ways AR Will Change the Reality of Manufacturing.”  Design News, September 11, 2018.
[Link] “9 Ways Augmented Reality Customer Experience Boosts Sales and Satisfaction.”  TechSee, January 18, 2019.
[Link] “3 Ways Augmented Reality is Taking Customer Experience to the Next Level.”  Entrepreneur, July 11, 2019.
[Link] “Augmented Reality Study Shows Big Business Impact in Customer Experience.”  SmarterCX, August 20, 2019.
[Link] “How to Use AR (Augmented Reality) to Improve the Customer Experience.”  HubSpot.
[Link] “Augmented Reality for Manufacturing:  Bringing Digital Transformation to Skilled Workers.”  ARC Advisory Group, December 2018.
[Link] “Manufacturing Lessons from Space.” Quality Digest, June 19, 2017.
[Link] “The Total Economic Impact of PTC Vuforia.”  Forrester, July 2019.
[Link] “Augmented reality: An application of heads-up display technology to manual manufacturing processes.”  Thomas Caudell and David Mizell, Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, vol. 2, 1992, pp. 659-669.
[Link] “AR/VR Reinvent Workforce Training.”  Plant Services, September 2019.
[Link] “How Augmented Reality Can Modernize Aerospace And Defense Manufacturing.”  Aerospace and Defense Technology, December 2019.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Substitute Your Own (Virtual) Reality]]>Wed, 12 Feb 2020 15:30:00 GMThttp://jaywinksolutions.com/blog/substitute-your-own-virtual-reality     The use of digital technologies in commercial applications is continually expanding.  Improvements in virtual reality (VR) systems have increased the practical range of opportunities for their use across varied industries.
     As discussed with respect to other technologies experiencing accelerated development and expansion, several definitions of “virtual reality” may be encountered.  Researchers and practitioners may disagree on which applications qualify for use of the term.  For our purposes, we will use a simple description of virtual reality:
“Virtual reality” is an experience created, using a digital twin or other model, where
  • the user has a first-person perspective (i.e. immersion),
  • sensory input is limited to the “reality” created (i.e. sensory deprivation and replacement),
  • physical phenomena are added to enhance realism (e.g. motion, acceleration, heat), and
  • the user can interact with the simulated environment.
     The complexity and accuracy of VR experiences vary according to their intended purpose and state of development.  The examples presented below cover a wide range of applications and, therefore, vastly different levels of sophistication.
     Some of the most advanced virtual reality experiences can be found in the entertainment industry.  Potentially huge audiences, and the resultant high returns, justify the investment in photorealistic rendering capabilities that provide more “authentic” virtual environments.  High expectations of these audiences also require it.
     An immersive adventure game is the quintessential example of virtual reality in entertainment.  3D movies represent a step toward virtuality, but lack the interactivity that defines a VR experience.
     Real estate brokers may also use virtual reality to bolster sales.  However, it is likely to be used only at the “high end” of the market.  A virtual tour of a multimillion-dollar mansion may entice a foreign investor to act swiftly, but a family purchasing their first home is unlikely to do so without first visiting the neighborhood and reviewing the property.  Also, the commission paid on the starter home will not justify the investment required to create the VR experience.
     Products and services also can be demonstrated via virtual reality.  This, too, however, is uncommon for the vast majority of goods and services in the marketplace.  Commodity offerings simply cannot support the development of VR experiences, nor are they likely to be highly influential in purchasing decisions.  Relying on retailers to effectively demonstrate products and educate consumers may also subject manufacturers to additional liabilities, further limiting its application.
     Creating a scarcely-used VR experience is often cost-prohibitive, as in the real estate example above.  One application for which it can provide great value, however, is process simulation.  While the VR may only be used a small number of times to evaluate safety, ergonomics, and productivity, the process that it simulates will be performed many, many times.  Some automotive manufacturers were early adopters of VR for evaluations of assembly lines to verify feasibility of tasks and reduce ergonomic and safety risks.
     Other early adopters include high-risk processing facilities, such as chemical plants and nuclear power-generation facilities.  In addition to process control and emergency response training, safety inspection training can also be facilitated by VR.  Inspectors can learn to identify fault conditions without the added risk of accidents or exposure created by their presence within the processing facility.
 
     There are several unconventional VR developments that demonstrate the broad range of application possible outside those we typically associate with the technology.
     First responders can safely train for potential emergency scenarios or review past situations.  Active-shooter drills can be conducted efficiently, as frequently as necessary, by eliminating the need to coordinate a large number of actors, secure a facility, etc.  Eliminating the actors also eliminates the risk of sustaining real injuries during a simulated event.
     Crime scene investigation can also be facilitated by digitizing the entire scene upon first entry.  Doing so allows detectives to “return” to the scene, via VR, as often as necessary without travel.  It may also allow more investigators to review the scene, minimizing overlooked clues.  A VR experience will not contaminate a crime scene, but entry by investigators will.
     Similarly, firefighters can be trained in methods used in explosive environments, for example, without creating an actual explosion risk.  Arson investigations can be facilitated in similar fashion to other crime scene investigations.  Also, a burned-out structure may not be sound; minimizing visits by investigators reduces risk of injury due to collapse or similar mishap.
     Areas struck by natural disasters can benefit from VR in many of the same ways as crime scenes.  Exploration via VR, to identify potential search and rescue sites, for example, reduces the risk of additional injury prior to commencing operations.  News outlets can also use the digitized scene to report on the incident without the need to access potentially dangerous areas with additional personnel and equipment.
     Advanced healthcare may be the most valuable application of VR to be developed.  Applications have been developed to teach amputees to control phantom limb pain, reducing the need for medication and overall suffering.  Mental health improvements are also being achieved with the help of VR-assisted cognitive behavioral therapy to treat phobias and social anxieties.  The range of conditions that can be treated with the assistance of VR is expected to expand, further improving the quality of healthcare.
     Virtual reality applications often focus on visual elements of the created environment.  Other senses can also be stimulated, however, if the application warrants.  Sound can be added to the VR environment easily, while other stimuli require more sophisticated equipment and programming.  For example, artificial skins are in development that enable simulation of bodily contact (i.e. pressure), vibration, heat, or other tactile feedback.  Simulators used for entertainment or training purposes often add several actuators to an enclosed environment to simulate motions or accelerations such as gravity.  If the motions and accelerations are not well-coordinated with the visual cues, however, intense motion sickness may result.
 
     In the past, developers were capable only of operating in the real world with physical objects.  Prototyping and physical testing were common, iterative activities in product development.  Training was often conducted using passive techniques, such as written documents or video presentations.  Risks of injury or other mishap were often identified during system construction and, many times, only after an injury or accident occurred.
     With the advent of virtual reality, fewer prototypes and physical tests are needed to develop products.  Training programs are much more interactive, increasing their effectiveness.  Many injuries and other negative consequences are avoided by proactively mitigating or eliminating risks with the help of VR analyses.
     The next step in the digital evolution is to combine the best of both worlds – the physical and the digital – to broaden the application and increase the value of digital tools in an array of industries.  This next step is called “augmented reality” and will be explored in future installments of “The Third Degree.”
[See “Augmented Reality – Part 1:  An Introduction to the Technology;” Part 2:  “Manufacturing Industry Applications;” Part 3:  “Applications in the Service Sector”]
 
     Feel free to contact JayWink Solutions for further discussion of virtual reality and how it, or other digital tools, could benefit your organization.
 
References
[Link] “The Future of Virtual Reality (VR) in Manufacturing.”  East West, April 12, 2016.
[Link] “Virtual reality comes of age in manufacturing.”  ComputerWeekly, February 16, 2015.
[Link] “Virtual Reality for Workplace Safety in the Industrial & Manufacturing Industry.” Centric Digital, October 13, 2016.
[Link] “10 industries rushing to embrace virtual reality.”  CNBC, December 1, 2016.
[Link] “9 industries using virtual reality.”  TechRepublic, March 10, 2015.
[Link] “Artificial Skin Provides Haptic Feedback.”  Tech Briefs, December 2019.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>
<![CDATA[Double Vision:  Digital Twins Materialize Operational Improvements]]>Wed, 29 Jan 2020 15:30:00 GMThttp://jaywinksolutions.com/blog/double-vision-digital-twins-materialize-operational-improvements     Digital Twin technology existed long before this term came into common use.  Over time, existing technology has advanced, new applications and research initiatives have surfaced, and related technologies have been developed.  This lack of centralized “ownership” of the term or technology has led to the proliferation of differing definitions of “digital twin.”
     Some definitions focus on a specific application or technology – that developed by those offering the definition – presumably to co-opt the term for their own purposes.  Arguably, the most useful definition, however, is the broadest – one that encompasses the range of relevant technologies and applications, capturing their corresponding value to the field.  To this end, I offer the following definition of digital twin:
     An electronic representation of a physical entity – product, machine, process, system, or facility – that aids understanding of the entity’s design, operation, capabilities, or condition.
Life Cycle of Twins
     The life cycle of a digital twin can be described by three phases of application, encompassing seven steps of progression.  The three phases are:
  • Design and Analysis,
  • Fabrication and Testing, and
  • Operation and Maintenance.
The life cycle of a digital twin is summarized in the diagram below.
     In the design and analysis phase, only the digital twin exists.  Tests and design modifications are conducted virtually.  In the fabrication and testing phase, physical entities are created and subjected to real-world tests.  Generalized tests are conducted to confirm strength of materials, corrosion resistance, aging (i.e. weathering), or other physical properties.  Verification and validation tests are conducted to evaluate the entity’s performance in the range of operating parameters specified for its service environment.  The operation and maintenance phase begins when the entity is placed in service.  Operating conditions are monitored to ensure that all parameters remain in their specified ranges or alert users of conditions that have not been successfully controlled.  Data collected from the entity in service is used to determine its maintenance needs.  Maintenance activity can then be scheduled to minimize the impact on the entity’s performance and availability.
     As a digital twin progresses through the phases, it becomes increasingly sophisticated.  Typically, the basis of a digital twin is a 3D CAD solid model; for some process applications, a 2D schematic sufficiently represents the entity to be monitored.  Sensor data is then layered on the model in text, charts, color-mapping, or other visual display format.  A fully-developed physical-digital twin pair will use two-way communication to maintain a constant state of synchronization.  The digital twin becomes more than a mere reporter of physical twin status; it becomes a remote control.  Proximity to the entity is no longer a requirement for its efficient operation.  However, it is not practical for every physical-digital twin pair to reach this level of development; some applications simply do not warrant it.
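     The sketch below illustrates this progression in simplified form; generic sensor and command interfaces are assumed, and no particular platform is implied.  It shows how a twin moves from passively layering sensor readings onto its model to pushing setpoints back to its physical counterpart.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DigitalTwin:
    entity_id: str
    model_ref: str                          # e.g. path or URI of the CAD model or schematic
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, sensor: str, value: float):
        """Layer a new sensor reading onto the twin's current state."""
        reading = {"sensor": sensor, "value": value,
                   "time": datetime.now(timezone.utc).isoformat()}
        self.state[sensor] = value
        self.history.append(reading)

    def command(self, send, setpoint: str, value: float):
        """Two-way link: push a setpoint back to the physical twin via a supplied transport."""
        send({"entity": self.entity_id, "setpoint": setpoint, "value": value})

# Usage (the transport function is assumed, e.g. an MQTT publish wrapper):
#   furnace = DigitalTwin("furnace-7", "models/furnace-7.step")
#   furnace.ingest("zone1_temp_C", 842.5)
#   furnace.command(mqtt_publish, "zone1_setpoint_C", 850.0)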
 
Example Physical – Digital Twin Pairs
     To better comprehend how digital twins are used to aid understanding of their physical counterparts, we will briefly discuss several examples.  Though some may be unfamiliar, it is likely that you have encountered others in your daily life; perhaps you hadn’t thought of them as twins before.  If that is the case, the following examples may help you recognize more digital twins when you encounter them or identify potential applications of your own.
     A CAD model or schematic, as mentioned previously, is often the foundation of a digital twin.  The model can serve many purposes, including:
  • aesthetic evaluation,
  • stress, thermal, aerodynamic, or other physical performance analysis,
  • dynamic simulation, such as mechanical motions and interference detection,
  • manufacturing process development, such as cutting tool path generation (CAM),
  • process simulation, such as capacity verification or queuing, and
  • demonstration and marketing materials.
Using the digital twin for analysis and process development minimizes the time and cost to bring a product to market or place an asset in service.  When a model can be used for virtual demonstrations, transport and setup of physical hardware is eliminated, increasing flexibility and reducing cost.
     Heat treat and sintering furnaces can be monitored via digital twins to ensure the quality of processed material.  Each zone of a belt-type furnace can be monitored in real time for temperature, humidity, and gas concentrations.  In batch furnaces, these parameters are monitored across time to develop the required material properties.  Without a digital twin, these processes require manual monitoring and adjustment in the heat treat environment, which is often inhospitable.  Manual monitoring also limits the number of processes that an individual can effectively monitor and control.
     Condition-based maintenance of equipment is facilitated by use of digital twins.  Temperatures, vibrations, fluid levels, current draw, and various other parameters can be continuously monitored.  This data allows technicians to evaluate a machine’s performance, track trends, and proactively service the machine to avert catastrophic failures.
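     A simple illustration of this logic follows; the limits and trend window are placeholders, not recommendations.  Each monitored parameter is compared against its limit, and upward trends are flagged so that service can be scheduled before a failure occurs.

from statistics import mean

# Illustrative limits only; actual values come from equipment specifications and history.
LIMITS = {"bearing_temp_C": 85.0, "vibration_mm_s": 7.1, "motor_current_A": 32.0}

def maintenance_alerts(readings, window=20):
    """readings: {parameter: [oldest .. newest values]}; returns human-readable alerts."""
    alerts = []
    for parameter, limit in LIMITS.items():
        series = readings.get(parameter, [])
        if not series:
            continue
        if series[-1] > limit:
            alerts.append(f"{parameter} over limit: {series[-1]} > {limit}")
        elif len(series) >= 2 * window and mean(series[-window:]) > 1.1 * mean(series[:window]):
            # Rising trend: schedule service before the hard limit is reached.
            alerts.append(f"{parameter} trending upward; plan proactive service")
    return alerts
 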
     Racecar telemetry allows a crew chief to monitor a digital twin of the car while it is on track.  Tire pressures, various engine parameters, brake temperatures, kinetic energy recovery efficiency of a hybrid powertrain, fuel burn rate, and aerodynamic downforce, among other data, can be tracked throughout on-track sessions.  The data collected helps the team diagnose issues, plan pit stops, and optimize performance.  Endurance racing teams can also infer from the data when a driver has become fatigued and should be relieved.
     Instead of transmitting data to a remote monitor, the Driver Information Center housed in the dashboard of most modern cars presents the digital twin directly to the driver.  Tire pressures, fuel economy, condition of motor oil, and powertrain configuration details are available at the touch of a button.  The system may also warn the driver of an open door, blown bulb, or other urgent situation.
     A flight simulator is a digital twin of an aircraft used to predict performance in a wide range of conditions – normal and emergency.  It is also used to train pilots how to safely operate the aircraft in these conditions.  A simulator incorporates all of the aircraft’s flight dynamics information, including aerodynamic characteristics, engine performance curves, control surface parameters, and system interdependencies to create realistic flight scenarios.  Though this digital twin does not include two-way communication and control of a physical twin, it is, nonetheless, very sophisticated, mirroring the incredible complexity of aerosystems.
     Forward-thinking urban planners are also beginning to use digital twin technology.  Modelling an entire city – a monumental task – allows proposed developments to be more thoroughly analyzed than has ever been possible before.  Infrastructure capacity – roads, bridges, power grids, water and sewer systems – can be more accurately assessed prior to project approval, minimizing unexpected service interruptions or system overloads.  The relationship between construction projects and climatic factors can also be studied in advance.  Contractors can better prepare for the impacts that weather patterns may have on construction.  Conversely, the effects of completed projects on sun exposure, runoff, and wind can be predicted prior to construction.
     Other examples include “smart home” technology (remote control of residential lighting, HVAC, security system, etc.), power-generation (nuclear, solar, wind) facility control rooms, chemical process industries (oil refinery, brewery), and testing of electrical load capacity and system redundancy in aircraft or other safety-critical systems.  Even healthcare is being impacted by the digital phenomenon.  As medical technology advances, sensors and scans are capable of creating ever-more accurate models of patients, allowing doctors to analyze the impacts of treatment choices on various systems in the human body.
 
Advantages of “Twinning”
     To summarize the discussion, we will focus on the reasons that organizations pursue “twinning” – creating digital twins of products, processes, and systems.  Preparing accurate and functionalized digital twins can yield a number of benefits, including:
  • reduced product development cost with fewer prototypes,
  • reduced manufacturing process development cost with less time and material waste,
  • faster time to market,
  • reduced marketing costs,
  • increased operational efficiency and optimized asset performance,
  • reduced cost of repairs and maintenance,
  • minimized unintended consequences of project execution,
  • increased personnel safety and reduced property damage,
  • increased product quality and consistency, and
  • reduced training costs.
     The benefits mentioned above are routinely achieved by organizations of many types across a wide range of applications.  There may be other advantages that are specific to your organization’s pursuit of twinning.
     If you, or others in your organization, are not yet convinced of the value of digital twins, I recommend choosing a pilot project or “proof of concept” application.  Simply put, “start small.”  As confidence builds, the twins can be developed further, adding capability and sophistication.  Additional twinning projects can also be launched as new applications are identified.
      As is the case with most initiatives, a rapid transformation is unlikely; the resources are simply not available to achieve it.  Therefore, a hard-sell all-or-nothing approach is usually counterproductive.  A soft launch is much better than a stone wall; the pilot project approach is consistent with the philosophy of continuous improvement.
 
     If your organization is ready to launch or accelerate its twinning efforts, feel free to contact JayWink Solutions for guidance.  We have the cure for double vision!
 
References
[Link] “Promoting Digital Twin Applications for Sustainable Manufacturing.”  Navigant Research, August 15, 2019.
[Link] “Navigant Research Report Shows Digital Twins Can Aid Manufacturer Sustainability Efforts.”  Business Wire, November 19, 2019.
[Link] “Leveraging Digital Twin Approach for Sustainable Manufacturing.”  Navigant Research, 3Q 2019.
[Link] “Exploiting digital twin technology to meet sustainability goals.”  Smart Energy International, November 19, 2019.
[Link] “Supply Chain News: The Opportunities for Using Digital Twins in Manufacturing.”  Supply Chain Digest, August 30, 2017.
[Link] “How To Use Digital Twins To Disrupt Manufacturing.”  Digitalist Magazine , April 4, 2018.
[Link] “Digital Twins: Enabling Next-Gen Manufacturing.”  Digitalist Magazine, November 4, 2018.
[Link] “Network Of Digital Twins Series.”  Digitalist Magazine, June 21 – August 30, 2018.
[Link] “Using Digital Twins to Reduce Costs in Machine Commissioning.”  Design News, January 2, 2018.
[Link] “Cheat sheet: What is Digital Twin?”  IBM Internet of Things blog, January 4, 2018.
[Link] “What Is Digital Twin Technology - And Why Is It So Important?”  Forbes, March 6, 2017.
[Link] “Digital Twin.”  GE Digital.
[Link] “Digital Twins.”  Happiest Minds.
[Link] “What Is a Digital Twin?”  IoT for All, January 3, 2019.
[Link] “The Digital Twin: Powerful Use Cases for Industry 4.0.”  Medium, October 14, 2018.
[Link] “A Better Way: Finding Efficiencies in the Product Design and Manufacturing Process.”  Daily CADCAM, August 2, 2016.
[Link] “A Model City.”  PM Network, August 2019.
[Link] “Connecting the Digital Twin: From Idea Through Production, to Customers and Back.”  Tech Briefs, June 2018.
[Link] “Employing the Electrical Digital Twin to Mitigate Compliance Risk in Aerospace.”  Tech Briefs, December 2019.
[Link] “Leveraging Digital Twin Technology in Model-Based Systems Engineering.”  Systems, January 2019.
[Link] “Industry 4.0 and the digital twin:  Manufacturing meets its match.”  Deloitte Insights, May 12, 2017.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com
]]>