Regardless of the decision-making model used, or how competent and conscientious a decision-maker is, making decisions involves risk. Some risks are associated with the individual or group making the decision. Others relate to the information used to make the decision. Still others are related to the way that this information is employed in the decision-making process.
Often, the realization of some risks increases the probability of realizing others; they are deeply intertwined. Fortunately, awareness of these risks and their interplay is often sufficient to mitigate them. To this end, several decision-making perils and predicaments are discussed below.
Myriad tools have been developed to aid collaboration among geographically separated team members. Temporally separated teams receive much less attention, despite this type of collaboration being paramount for success in many operations.
To achieve performance continuity in multi-shift operations, an effective pass-down process is required. Software is available to facilitate pass-down, but is not required for an effective process. The lowest-tech tools are often the best choices. A structured approach is the key to success – one that encourages participation, organization, and consistent execution.
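To illustrate one low-tech approach, a structured pass-down can be as simple as requiring every shift to complete the same few fields. The sketch below is hypothetical; the field names and categories are assumptions chosen for illustration, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PassDownEntry:
    """One shift's pass-down record; all fields are illustrative."""
    shift: str              # e.g., "first", "second", "third"
    author: str             # outgoing shift lead
    logged_at: datetime
    open_issues: list = field(default_factory=list)     # unresolved problems
    actions_taken: list = field(default_factory=list)   # work completed this shift
    actions_needed: list = field(default_factory=list)  # explicit requests of the next shift

entry = PassDownEntry(
    shift="second",
    author="J. Smith",
    logged_at=datetime(2023, 5, 1, 22, 45),
    open_issues=["Press #3 intermittent alarm"],
    actions_taken=["Replaced coolant filter on Line 2"],
    actions_needed=["Verify Press #3 alarm does not recur"],
)
print(entry.actions_needed)  # the incoming shift's to-do list
```

Whether captured in software or on a paper form, the value comes from every shift answering the same questions in the same order.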
There is some disagreement among quality professionals about whether precontrol is a form of statistical process control (SPC). Like many tools prescribed by the Shainin System, precontrol’s statistical sophistication is disguised by its simplicity. The attitude of many seems to be that if it isn’t difficult or complex, it must not be rigorous.
Despite its simplicity, precontrol provides an effective means of process monitoring with several advantages over control charting.
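For readers unfamiliar with the mechanics, the sketch below encodes the standard precontrol scheme – the middle half of the tolerance is the green zone, the outer quarters are yellow, and anything beyond a specification limit is red – along with one common two-unit sampling rule. Rule variants exist; treat this as a minimal illustration rather than a complete procedure.

```python
def precontrol_zone(x, lsl, usl):
    """Classify a measurement into a precontrol zone.

    The middle half of the tolerance is green, the outer quarters
    are yellow, and anything outside the specification limits is red.
    """
    tol = usl - lsl
    lo_pc, hi_pc = lsl + tol / 4, usl - tol / 4   # precontrol lines
    if x < lsl or x > usl:
        return "red"
    if lo_pc <= x <= hi_pc:
        return "green"
    return "yellow"

def run_decision(a, b, lsl, usl):
    """Apply a common two-unit sampling rule (one of several variants)."""
    zones = {precontrol_zone(a, lsl, usl), precontrol_zone(b, lsl, usl)}
    if "red" in zones:
        return "stop"        # a unit is out of specification
    if zones == {"yellow"}:
        return "adjust"      # two yellows: act on the process
    return "continue"        # at least one green and no red

print(run_decision(10.2, 10.4, lsl=10.0, usl=11.0))  # -> continue
```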
Lesser known than Six Sigma, but no less valuable, the Shainin System is a structured program for problem solving, variation reduction, and quality improvement. While there are similarities between these two systems, some key characteristics stand in stark contrast.
This installment of “The War on Error” introduces the Shainin System, providing background information and a description of its structure. Some common problem-solving tools will also be described. Finally, a discussion of the relationship between the Shainin System and Six Sigma will be presented, allowing readers to evaluate the potential for implementation of each in their organizations.
Despite the ubiquity of corporate Six Sigma programs and the intensity of their promotion, it is not uncommon for graduates to enter industry with little exposure to these programs and even less understanding of their administration or purpose. Universities that offer Six Sigma instruction often do so as a separate certificate, not integrated with any degree program. Students are often unaware of the availability or the value of such a certificate.
Upon entering industry, the tutelage of an invested and effective mentor is far from guaranteed. This can curtail entry-level employees’ ability to contribute to company objectives, or even to understand the conversations taking place around them. Without a structured introduction, these employees may struggle to succeed in their new workplace, while responsibility for failure is misplaced.
This installment of “The War on Error” aims to provide an introduction sufficient to facilitate entry into a Six Sigma environment. May it also serve as a refresher for those seeking reentry after a career change or hiatus.
While Vol. IV focused on variable gauge performance, this installment of “The War on Error” presents the study of attribute gauges. Requiring the judgment of human appraisers adds a layer of nuance to attribute assessment. Although we refer to attribute gauges, assessment may be made exclusively by the human senses. Thus, analysis of attribute gauges may be less intuitive or straightforward than that of their variable counterparts.
Conducting an attribute gauge study is similar to conducting a variable gauge R&R study. The key difference is in data collection – rather than a continuum of numeric values, attributes are evaluated with respect to a small number of discrete categories. Categorization can be as simple as pass/fail; it may also involve grading a feature relative to a “stepped” scale. Such a scale could contain several gradations of color, transparency, or another visual characteristic, or it could reflect subjective assessments of fit or another performance characteristic.
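To make the pass/fail case concrete, a common agreement statistic for attribute data is Cohen’s kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch, with illustrative ratings only:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters, or two trials by the same rater.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from each rater's marginal category frequencies.
    """
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Illustrative data: one appraiser rating the same ten parts twice.
trial_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
trial_2 = ["pass", "pass", "fail", "pass", "pass", "pass", "pass", "fail", "pass", "pass"]
print(round(cohens_kappa(trial_1, trial_2), 3))  # -> 0.737
```

The same calculation extends to multi-category scales; kappa near 1 indicates consistent judgment, while kappa near 0 indicates agreement no better than chance.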
While you may have been hoping for rest and relaxation, the title actually refers to Gauge R&R – repeatability and reproducibility. Gauge R&R, or GRR, comprises a substantial share of the effort required by measurement system analysis. Preparation and execution of a GRR study can be resource-intensive; taking shortcuts, however, is ill-advised. The costs of accepting an unreliable measurement system are long-term and far in excess of the short-term inconvenience caused by a properly conducted analysis.
The focus here is the evaluation of variable gauges. Prerequisites of a successful GRR study will be described and methodological alternatives will be defined. Finally, interpretation of results and acceptance criteria will be discussed.
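As a preview of the arithmetic involved, the sketch below combines repeatability (equipment variation, EV) and reproducibility (appraiser variation, AV) into a single GRR figure and expresses it as a percentage of total variation. The input values are placeholders; the thresholds in the comment reflect commonly cited AIAG guidance.

```python
import math

def percent_grr(ev, av, pv):
    """Percent GRR from standard deviations of equipment variation (EV),
    appraiser variation (AV), and part-to-part variation (PV).

    GRR = sqrt(EV^2 + AV^2); total variation TV = sqrt(GRR^2 + PV^2).
    """
    grr = math.sqrt(ev ** 2 + av ** 2)
    tv = math.sqrt(grr ** 2 + pv ** 2)
    return 100 * grr / tv

# Placeholder values for illustration only.
pct = percent_grr(ev=0.021, av=0.012, pv=0.110)
print(f"%GRR = {pct:.1f}%")  # under 10%: acceptable; 10-30%: marginal; over 30%: unacceptable
```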
According to the illustrious Joseph M. Juran, there is a “universal sequence for quality improvement” that defines the actions to be taken by any team to effect change. This includes teams pursuing error- and defect-reduction initiatives, variation reduction, or quality improvement by any other description.
Two of the seven steps of the universal sequence are “journeys” that the team must take to complete its problem-solving mission. The “diagnostic journey” and the “remedial journey” comprise the core of the problem-solving process and, thus, warrant particular attention.
Of the “eight wastes of lean,” the impacts of defects may be the easiest to understand. Most people find the need to rework or replace a defective part, or to repeat a faulty service, and the costs that result, intuitive. The consequences of excess inventory, motion, or transportation, however, may require a deeper understanding of operations management to fully appreciate.
Conceptually, poka yoke (poh-kah yoh-keh) is one of the simplest lean tools; at least it was at its inception. Over time, use of the term has morphed and expanded, increasing misuse and confusion. The desire to appear enlightened and lean has led many to misappropriate the term, applying it to any mechanism used, or attempt made, to reduce defects. Poka yoke is often conflated with other process control mechanisms, including engineering controls and management controls.
To effectively reduce the occurrence of errors and resultant defects, it is imperative that process managers differentiate between poka yoke devices, engineering controls, and management controls. Understanding the capabilities and limitations of each allows appropriate actions to be taken to optimize the performance of any process.
Every organization wants error kept to a minimum. The dedication to fulfilling this desire, however, often varies according to the severity of the consequences likely to result. Manufacturers miss delivery dates or ship faulty product; service providers fail to satisfy customers or damage their property; militaries lose battles or cause civilian casualties; all increase the cost of operations.
You probably have some sensitivity to the effects errors have on your organization and its partners. This series explores strategies, tools, and related concepts to help you effectively combat error and its effects. This is your induction; welcome to The War on Error.
Previous volumes of “Making Decisions” have alluded to voting processes, but were necessarily lacking in detail on this component of group decision-making. This volume remedies that deficiency, discussing some common voting systems in use for group decision-making. Some applications and issues that plague these systems are also considered.
Although “voting” is more often associated with political elections than decision-making, the two are perfectly compatible. An election, after all, is simply a group (constituency) voting to decide (elect) which alternative (candidate) to implement (inaugurate). Many descriptions of voting systems are given in the context of political elections; substituting key words, as shown above, often provides sufficient understanding to employ them for organizational decision-making.
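To illustrate the substitution, the sketch below implements two common systems, plurality and Borda count, in organizational terms: “alternatives” in place of candidates and “ballots” cast by group members. The ballots are illustrative only.

```python
from collections import Counter

def plurality(ballots):
    """Each ballot names one alternative; the most-named alternative wins."""
    return Counter(ballots).most_common(1)[0][0]

def borda(ballots):
    """Each ballot ranks all alternatives; an alternative earns n - 1 points
    for a first-place ranking, n - 2 for second place, and so on."""
    scores = Counter()
    for ranking in ballots:
        n = len(ranking)
        for place, alternative in enumerate(ranking):
            scores[alternative] += n - 1 - place
    return scores.most_common(1)[0][0]

# Illustrative ballots from a five-member group choosing among three options.
print(plurality(["A", "B", "A", "C", "A"]))  # -> A
print(borda([["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"],
             ["B", "C", "A"], ["B", "A", "C"]]))  # -> B
```

More elaborate systems, such as ranked-choice or approval voting, substitute just as directly.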
“Fundamentals of Group Decision-Making” (Vol. IV) addressed structural attributes of decision-making groups. In this volume, we discuss some ways a group’s activities can be conducted. An organization may employ several different techniques, at different times, in order to optimize the decision-making process for a specific project or group.
The following selection of techniques is not comprehensive; organizations may discover others that are useful. Also, an organization may develop its own technique, often using a commonly known technique as a foundation on which to create a unique process. The choice or development of a decision-making process must consider the positive and negative impacts – potential or realized – on decision quality, efficiency, and organizational performance factors.
In business contexts, many decisions are made by a group instead of an individual. The same is true of other types of organizations, such as nonprofits, educational institutions, and legislative bodies. Group decision-making has its advantages and its disadvantages. Several other considerations are also relevant, such as selecting members, defining decision rules, and choosing or developing a process to follow.
Successful group decision-making relies on a disciplined approach that proactively addresses common pitfalls. If an organization establishes a standard that defines how it will form groups and conduct its decision-making activities, it can reap the rewards of faster, higher-quality decisions, clearer expectations, less conflict, and greater cooperation.
While the Rational Model provides a straightforward decision-making aid that is easy to understand and implement, it is not well-suited, on its own, to highly complex decisions. A large number of decision criteria may create numerous tradeoff opportunities that are not easily comparable. Likewise, disparate performance expectations of alternatives may make the “best” choice elusive. In these situations, an additional evaluation tool is needed to ensure a rational decision.
The scenario described above calls for Multi-criteria Analysis (MCA). One form of MCA is the Analytic Hierarchy Process (AHP). In this installment of “Making Decisions,” application of AHP is explained and demonstrated via a common example – a purchasing decision to source a new production machine.
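As a preview of the mechanics, the sketch below derives criteria weights from a pairwise comparison matrix using the row geometric-mean approximation; AHP proper derives weights from the principal eigenvector, but the geometric mean is a widely used shortcut. The criteria and judgments shown are illustrative only.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights via row geometric means.

    matrix[i][j] holds the judged importance of criterion i relative to
    criterion j on Saaty's 1-9 scale; matrix[j][i] must be its reciprocal.
    """
    n = len(matrix)
    geo_means = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Illustrative judgments for three criteria: cost, capability, lead time.
# Cost is judged 3x as important as capability and 5x as important as
# lead time; capability is judged 2x as important as lead time.
comparisons = [
    [1,     3,     5],
    [1 / 3, 1,     2],
    [1 / 5, 1 / 2, 1],
]
print([round(w, 3) for w in ahp_weights(comparisons)])  # -> [0.648, 0.23, 0.122]
```

The same pairwise procedure is then repeated to score each alternative against each criterion; the weighted sums rank the alternatives.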
The rational model of decision-making feels familiar, intuitive, even obvious to most of us. This is true despite the fact that few of us follow a well-defined process consistently. Inconsistency in the process is reflected in poor decision quality, failure to achieve objectives, or undesired or unexpected outcomes.
Versions of the rational model are available from various sources, though many do not identify the process by this name. Ranging from four to eight steps, with the description of each step varying significantly, these versions offer a wide variety of perspectives on the classic sequential decision-making process. Fundamentally, however, each is simply an interpretation of the rational model of decision-making.
Reviewing past installments of “The Third Degree” in preparation for the update post “Hindsight is 20/20; Foresight is 2020,” I realized that there had been a significant oversight. This post is aimed at correcting that oversight and filling the void I’m sure we have all felt.
In “Of Delegating and Dumping,” a compare-and-contrast exploration of the two managerial styles, I referenced “The Dumper’s Creed,” but had not presented it. Until now!
The advent of a new year inspires a great deal of reflection and anticipation. Many of us will evaluate our personal and professional progress over the past 12 months and set new goals for the upcoming year. The same is true for “The Third Degree;” this installment will look back at some posts to provide additional resources related to the topics discussed. It will also look ahead to preview topics to be covered in future posts.
Given the amount of time people spend in meetings, organizations expend shockingly little effort to ensure that these meetings have value. Rarely is an employee – much less a volunteer – provided any formal instruction on leading or participating in meetings; most of us learn by observing the behavior of others. The low probability that those around us have been trained in optimal meeting practices renders this exercise equivalent to “the blind leading the blind.” The nature of these meetings is more likely to demonstrate the power structure of the organization than proper protocols.
Typical meetings suffer from a raft of problems that render them inefficient or ineffective. That is, they range from a moderate waste of time, while accomplishing something, to a total waste of time that accomplishes nothing. This need not be the case, however. Though an immediate overhaul may be an unrealistic expectation, incremental changes can be made to the way meetings are conducted, progressively increasing their value and developing a more efficient organization.
Introduced nearly a century ago, flow charts are one of the most basic mapping tools available; they are also very useful. As such, they have become ubiquitous, though the name used may vary slightly – flow diagram, process map, etc. When packaged with a PFMEA and Control Plan, the flow chart is called a Process Flow Diagram (PFD). Extensions of the original flow chart have also been developed, identified with new aliases for what is, at its core, a process flow chart.
The variations need not be a distraction; a basic flow chart can be very useful to your organization. Once a basic chart is available, it can be expanded or modified to suit your needs as you learn and gain experience. The following discussion demonstrates this progression.
“Beware the Metrics System – Part 1” presented potential advantages of implementing a metrics system, metric classifications, and warnings of potential pitfalls. This installment will provide examples from diverse industries and recommendations for development and management of metrics systems.
Every business uses metrics to assess various aspects of its performance. Some – usually the smallest and least diversified – may focus exclusively on the most basic financial measures. Others may be found at the opposite end of the spectrum, tracking a multitude of metrics across the entire organization – finance, operations, sales & marketing, human resources, research & development, and so on. The more extensively metricated organization is not necessarily more efficiently operated or more effectively managed, however. The administration of a metrics system incurs costs that must be balanced with its utility for it to be valuable to an organization.
An efficacious metrics system can greatly facilitate an organization’s management and improvement; a misguided one can be detrimental, in numerous ways, to individuals, teams, and the entire organization. The structure of a well-designed metrics system is influenced by the nature of the organization to be monitored – product vs. service, for-profit vs. nonprofit, public vs. private, large vs. small, start-up vs. well-established, etc. Organizations often choose to present their metrics systems according to popular templates – Management by Objectives (MBO), Key Performance Indicators (KPI), Objectives and Key Results (OKR), or Balanced Scorecard – but may choose to create a unique system or a hybrid. No matter what form it takes, or what name it is given, the purpose of a metrics system remains constant: to monitor and control – that is, to manage – the organization’s performance according to criteria its leaders deem relevant.
The ability to formulate relevant, probing, often open-ended questions and present them at opportune times to appropriate individuals is incredibly valuable. Honing this skill will secure your reputation as a thought leader among product development, process development, or other project team members.
Many laud those who seem to have “all the answers,” but to what questions? Solving problems in your business is not a trivia game; having all the answers to questions that do not expose the underlying causes of issues or reveal improvement opportunities is of little value to your team. In most cases, it is much easier to find an answer to a question than to construct a question in a way that maximizes the value of the answer.
Modern gurus of self-help have changed the narrative from “improve your weaknesses” to “play to your strengths.” However, the –abilities that drive performance in manufacturing and service operations require both approaches. A successful strategy includes extracting maximum value from well-developed –abilities and continually improving the weaker ones. The –abilities that drive performance include stability, reliability, profitability, and others. Some are more critical in a specific context; some have multiple interpretations; all deserve attention.
The –abilities that drive performance are straightforward concepts. The problem is that many managers and entrepreneurs lose sight of the basics while pursuing higher-level objectives. Let this post be a warning against this and a reminder of how solid fundamentals create a path to success.
In Part 1, the D•I•P•O•D Process Model and template were presented and explained. In this installment, an example deployment will be illustrated to demonstrate the variety of factors to be considered in an analysis. A special note on troubleshooting will warn practitioners against developing a false sense of security or accomplishment. Then, a number of common errors will be shared to help practitioners avoid them.
Well-designed models can be invaluable aids to development and analysis. 3D CAD models assist the detection of physical interferences in an assembly and the rapid calculation of stresses within its components. Mold-flow analysis helps injection molders predict processing problems. Various forms of simulation help us evaluate potential performance and identify risks before any products are manufactured, tooling built, routes established, or services performed.
Successful process planning, troubleshooting, and continuous improvement begin with applying fundamentals. A model need not be as sophisticated as mold-flow or finite-element analysis to be useful, nor does it require high-performance computers with extensive computational capability. For many purposes, a simple diagram can provide the guidance needed for users to achieve breakout performance by focusing attention on what is relevant to the achievement of objectives, while clearing the clutter of distractions. The D•I•P•O•D Process Model is a great example of effective simplicity when used for process planning, development, or troubleshooting.
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC