Choosing effective strategies for waging war against error in manufacturing and service operations requires an understanding of “the enemy.” The types of error to be combatted, the sources of these errors, and the amount of error that will be tolerated are important components of a functional definition (see Vol. I for an introduction).
The traditional view is that the amount of error to be accepted is defined by the specification limits of each characteristic of interest. Exceeding the specified tolerance of any characteristic immediately transforms the process output from “good” to “bad.” This is a very restrictive and misleading point of view. Much greater insight is provided regarding product performance and customer satisfaction by loss functions.
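The contrast can be made concrete with a short sketch comparing the traditional "goal-post" view with a quadratic (Taguchi-style) loss function. The specification limits, target, and loss coefficient below are illustrative, not drawn from any particular process:

```python
def goal_post_loss(y, lsl, usl, scrap_cost):
    """Traditional view: zero loss inside the spec limits, full cost outside."""
    return 0.0 if lsl <= y <= usl else scrap_cost

def quadratic_loss(y, target, k):
    """Taguchi-style view: loss grows with the squared deviation from target."""
    return k * (y - target) ** 2

# Illustrative values: spec limits 9-11, target 10, k in $ per unit^2 of deviation
lsl, usl, target, k = 9.0, 11.0, 10.0, 4.0

# A part just inside the limit is "good" under the traditional view, yet it
# carries nearly the same quadratic loss as one just outside the limit.
for y in (10.0, 10.9, 11.1):
    print(y, goal_post_loss(y, lsl, usl, scrap_cost=5.0),
          round(quadratic_loss(y, target, k), 2))
```

The quadratic form makes the key insight visible: performance degrades continuously as output drifts from target, rather than collapsing abruptly at a tolerance boundary.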
Regardless of the decision-making model used, or how competent and conscientious a decision-maker is, making decisions involves risk. Some risks are associated with the individual or group making the decision. Others relate to the information used to make the decision. Still others are related to the way that this information is employed in the decision-making process.
Often, the realization of some risks increases the probability of realizing others; they are deeply intertwined. Fortunately, awareness of these risks and their interplay is often sufficient to mitigate them. To this end, several decision-making perils and predicaments are discussed below.
Myriad tools have been developed to aid collaboration among geographically separated team members. Temporally separated teams receive much less attention, despite this type of collaboration being paramount for success in many operations.
To achieve performance continuity in multi-shift operations, an effective pass-down process is required. Software is available to facilitate pass-down, but is not required for an effective process. The lowest-tech tools are often the best choices. A structured approach is the key to success – one that encourages participation, organization, and consistent execution.
Lesser known than Six Sigma, but no less valuable, the Shainin System is a structured program for problem solving, variation reduction, and quality improvement. While there are similarities between these two systems, some key characteristics lie in stark contrast.
This installment of “The War on Error” introduces the Shainin System, providing background information and a description of its structure. Some common problem-solving tools will also be described. Finally, a discussion of the relationship between the Shainin System and Six Sigma will be presented, allowing readers to evaluate the potential for implementation of each in their organizations.
Despite the ubiquity of corporate Six Sigma programs and the intensity of their promotion, it is not uncommon for graduates to enter industry with little exposure and less understanding of their administration or purpose. Universities that offer Six Sigma instruction often do so as a separate certificate, not integrated with any degree program. Students are often unaware of the availability or the value of such a certificate.
Upon entering industry, the tutelage of an invested and effective mentor is far from guaranteed. This can curtail entry-level employees’ ability to contribute to company objectives, or even to understand the conversations taking place around them. Without a structured introduction, these employees may struggle to succeed in their new workplace, while responsibility for failure is misplaced.
This installment of “The War on Error” aims to provide an introduction sufficient to facilitate entry into a Six Sigma environment. May it also serve as a refresher for those seeking reentry after a career change or hiatus.
There is a “universal sequence for quality improvement,” according to the illustrious Joseph M. Juran, that defines the actions to be taken by any team to effect change. This includes teams pursuing error- and defect-reduction initiatives, variation reduction, or quality improvement by any other description.
Two of the seven steps of the universal sequence are “journeys” that the team must take to complete its problem-solving mission. The “diagnostic journey” and the “remedial journey” comprise the core of the problem-solving process and, thus, warrant particular attention.
Of the "eight wastes of lean," the impacts of defects may be the easiest to understand. Most find the costs of reworking or replacing a defective part, or of repeating a faulty service, intuitive. The consequences of excess inventory, motion, or transportation, however, may require a deeper understanding of operations management to fully appreciate.
Conceptually, poka yoke (poh-kah yoh-keh) is one of the simplest lean tools; at least it was at its inception. Over time, use of the term has morphed and expanded, increasing misuse and confusion. The desire to appear enlightened and lean has led many to misappropriate the term, applying it to any mechanism used, or attempt made, to reduce defects. Poka yoke is often conflated with other process control mechanisms, including engineering controls and management controls.
To effectively reduce the occurrence of errors and resultant defects, it is imperative that process managers differentiate between poka yoke devices, engineering controls, and management controls. Understanding the capabilities and limitations of each allows appropriate actions to be taken to optimize the performance of any process.
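The distinction can be illustrated with a software analogy; the keyed-fixture scenario, function names, and messages below are hypothetical, chosen only to contrast prevention with detection:

```python
# Software analogy for two of the control types discussed above; the keyed
# fixture, function names, and messages are hypothetical.

ALLOWED_ORIENTATIONS = {"keyed"}  # the fixture accepts only one orientation

def load_part_poka_yoke(orientation):
    """Poka yoke: the keyed fixture rejects any wrong orientation outright,
    so a misloaded part can never enter the process."""
    if orientation not in ALLOWED_ORIENTATIONS:
        raise ValueError("part does not fit the fixture; misload prevented")
    return "loaded"

def load_part_engineering_control(orientation):
    """Engineering control: a sensor detects a misload after it occurs and
    halts the line; the error happened, but it is contained automatically."""
    if orientation not in ALLOWED_ORIENTATIONS:
        return "line halted: sensor detected misload"
    return "loaded"

# A management control, by contrast, would be a work instruction and a
# periodic audit -- no mechanism at all -- relying on people to follow
# the procedure.
```

The essential difference is where the control acts: the poka yoke makes the error impossible to commit, the engineering control catches it automatically after the fact, and the management control depends entirely on human compliance.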
Every organization wants error to be kept at a minimum. The dedication to fulfilling this desire, however, often varies according to the severity of consequences that are likely to result. Manufacturers miss delivery dates or ship faulty product; service providers fail to satisfy customers or damage their property; militaries lose battles or cause civilian casualties; all increase the cost of operations.
You probably have some sensitivity to the effects errors have on your organization and its partners. This series explores strategies, tools, and related concepts to help you effectively combat error and its effects. This is your induction; welcome to The War on Error.
Previous volumes of “Making Decisions” have alluded to voting processes, but were necessarily lacking in detail on this component of group decision-making. This volume remedies that deficiency, discussing some common voting systems in use for group decision-making. Some applications and issues that plague these systems are also considered.
Although “voting” is more often associated with political elections than decision-making, the two are perfectly compatible. An election, after all, is simply a group (constituency) voting to decide (elect) which alternative (candidate) to implement (inaugurate). Many descriptions of voting systems are given in the context of political elections; substituting key words, as shown above, often provides sufficient understanding to employ them for organizational decision-making.
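Two of the most common voting systems can be sketched briefly. The ranked ballots and alternative names below are hypothetical; they are chosen to show that different systems can select different winners from the same preferences:

```python
from collections import Counter

# Hypothetical ranked ballots: each lists alternatives, most preferred first.
ballots = [
    ["A", "B", "C"], ["A", "B", "C"], ["A", "B", "C"],
    ["B", "C", "A"], ["B", "C", "A"],
    ["C", "B", "A"], ["C", "B", "A"],
]

def plurality(ballots):
    """Each voter's top choice gets one vote; the most votes wins."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Each rank earns points (last place = 0); the highest total wins."""
    scores = Counter()
    for b in ballots:
        n = len(b)
        for rank, alt in enumerate(b):
            scores[alt] += n - 1 - rank
    return scores.most_common(1)[0][0]

print(plurality(ballots))  # A: most first-place votes
print(borda(ballots))      # B: broadest overall support
```

Here plurality selects A (three first-place votes), while the Borda count selects B (ranked first or second by every voter) — a simple demonstration of why the choice of voting system is itself a consequential decision.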
“Fundamentals of Group Decision-Making” (Vol. IV) addressed structural attributes of decision-making groups. In this volume, we discuss some ways a group’s activities can be conducted. An organization may employ several different techniques, at different times, in order to optimize the decision-making process for a specific project or group.
The following selection of techniques is not comprehensive; organizations may discover others that are useful. Also, an organization may develop its own technique, often using a commonly known technique as a foundation on which to create a unique process. The choice or development of a decision-making process must consider the positive and negative impacts – potential or realized – on decision quality, efficiency, and organizational performance factors.
In business contexts, many decisions are made by a group instead of an individual. The same is true for other types of organization as well, such as nonprofits, educational institutions, and legislative bodies. Group decision-making has its advantages and its disadvantages. There are several other considerations also relevant to group decision-making, such as selecting members, defining decision rules, and choosing or developing a process to follow.
Successful group decision-making relies on a disciplined approach that proactively addresses common pitfalls. If an organization establishes a standard that defines how it will form groups and conduct its decision-making activities, it can reap the rewards of faster, higher-quality decisions, clearer expectations, less conflict, and greater cooperation.
While the Rational Model provides a straightforward decision-making aid that is easy to understand and implement, it is not well-suited, on its own, to highly complex decisions. A large number of decision criteria may create numerous tradeoff opportunities that are not easily comparable. Likewise, disparate performance expectations of alternatives may make the “best” choice elusive. In these situations, an additional evaluation tool is needed to ensure a rational decision.
The scenario described above requires Multi-criteria Analysis (MCA). One form of MCA is Analytic Hierarchy Process (AHP). In this installment of “Making Decisions,” application of AHP is explained and demonstrated via a common example – a purchasing decision to source a new production machine.
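One core step of AHP – deriving criterion weights from a pairwise comparison matrix – can be sketched with the normalized-column-average approximation to the principal eigenvector. The criteria and comparison values below are illustrative, not taken from the machine-sourcing example:

```python
# Illustrative criteria for a machine purchase and a pairwise comparison
# matrix on the 1-9 Saaty scale: pairwise[i][j] expresses how strongly
# criterion i is preferred over criterion j.
criteria = ["cost", "throughput", "maintainability"]
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]

def ahp_weights(matrix):
    """Normalize each column to sum to 1, then average across each row."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

weights = ahp_weights(pairwise)  # roughly [0.63, 0.26, 0.11]; sums to 1
```

The resulting weights rank the criteria by relative importance; a full AHP analysis would repeat this procedure to score each alternative against each criterion, then combine the results, checking consistency of the judgments along the way.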
The rational model of decision-making feels familiar, intuitive, even obvious to most of us. This is true despite the fact that few of us follow a well-defined process consistently. Inconsistency in the process is reflected in poor decision quality, failure to achieve objectives, or undesired or unexpected outcomes.
Versions of the rational model are available from various sources, though many do not identify the process by this name. Ranging from four to eight steps, with descriptions that vary significantly, these sources offer a wide variety of perspectives on the classic sequential decision-making process. Fundamentally, however, each is simply an interpretation of the rational model of decision-making.
Given the importance of decision-making in our personal and professional lives, the topic receives shockingly little attention. The potential consequences of low-quality decisions warrant extensive courses to build critical skills, yet few of us ever receive significant instruction in decision-making during formal education, as part of on-the-job training, or from mentors. It is even under the radar of many conscientious autodidacts. The “Making Decisions” series of “The Third Degree” aims to raise the profile of this critical skillset and provide sufficient information to improve readers’ decision-making prowess.
It is helpful, when beginning to study a new topic, to familiarize oneself with some of the unique terminology that will be encountered. This installment of “Making Decisions” will serve as a glossary for reference throughout the series. It also provides a preview of the series content and a directory of published volumes.
Reviewing past installments of “The Third Degree” in preparation for the update post “Hindsight is 20/20; Foresight is 2020,” I realized that there had been a significant oversight. This post is aimed at correcting that oversight and filling the void I’m sure we have all felt.
In “Of Delegating and Dumping,” a compare-and-contrast exploration of the two managerial styles, I referenced “The Dumper’s Creed,” but had not presented it. Until now!
Given the amount of time people spend in meetings, organizations expend shockingly little effort to ensure that these meetings have value. Rarely is an employee – much less a volunteer – provided any formal instruction on leading or participating in meetings; most of us learn by observing the behavior of others. The low probability that those around us have been trained in optimal meeting practices renders this exercise equivalent to “the blind leading the blind.” The nature of these meetings is more likely to demonstrate the power structure of the organization than proper protocols.
Typical meetings suffer from a raft of problems that render them inefficient or ineffective. That is, they range from a moderate waste of time, while accomplishing something, to a total waste of time that accomplishes nothing. This need not be the case, however. Though an immediate overhaul may be an unrealistic expectation, incremental changes can be made to the way meetings are conducted, progressively increasing their value and developing a more efficient organization.
Introduced nearly a century ago, flow charts are one of the most basic mapping tools available; they are also very useful. As such, they have become ubiquitous, though the name used may vary slightly – flow diagram, process map, etc. When packaged with a PFMEA and Control Plan, it is a Process Flow Diagram (PFD). Extensions of the original flow chart have also been developed, identified with new aliases for what is, at its core, a process flow chart.
The variations need not be a distraction; a basic flow chart can be very useful to your organization. Once a basic chart is available, it can be expanded or modified to suit your needs as you learn and gain experience. The following discussion demonstrates this progression.
“Beware the Metrics System – Part 1” presented potential advantages of implementing a metrics system, metric classifications, and warnings of potential pitfalls. This installment will provide examples from diverse industries and recommendations for development and management of metrics systems.
Every business uses metrics to assess various aspects of its performance. Some – usually the smallest and least diversified – may focus exclusively on the most basic financial measures. Others may be found at the opposite end of the spectrum, tracking a multitude of metrics across the entire organization – finance, operations, sales & marketing, human resources, research & development, and so on. The more extensively metricated organization is not necessarily more efficiently operated or more effectively managed, however. The administration of a metrics system incurs costs that must be balanced with its utility for it to be valuable to an organization.
An efficacious metrics system can greatly facilitate an organization’s management and improvement; a misguided one can be detrimental, in numerous ways, to individuals, teams, and the entire organization. The structure of a well-designed metrics system is influenced by the nature of the organization to be monitored – product vs. service, for-profit vs. nonprofit, public vs. private, large vs. small, start-up vs. mature, etc. Organizations often choose to present their metrics systems according to popular templates – Management by Objectives (MBO), Key Performance Indicators (KPI), Objectives and Key Results (OKR), or Balanced Scorecard – but may choose to create a unique system or a hybrid. No matter what form it takes, or what name it is given, the purpose of a metrics system remains constant: to monitor and control – that is, to manage – the organization’s performance according to criteria its leaders deem relevant.
Always on the lookout for useful or clever analogies that facilitate understanding of complex systems or ideas, I make some discoveries with great pleasure, while others disappoint. The law of averages demands it.
The jigsaw puzzle is no stranger to analogy-building. One example appeared earlier this year in Plant Services’ “Human Capital” column (“The Jigsaw Puzzle of Reliability,” March 2019). Unfortunately, this is one that left me underwhelmed. Perhaps space limitations precluded full development of the analogy; the author’s forthcoming book may correct this. In any case, this installment of “The Third Degree” is my attempt to redeem the venerable jigsaw puzzle analogy.
The ability to formulate relevant, probing, often open-ended questions and present them at opportune times to appropriate individuals is incredibly valuable. Honing this skill will secure your reputation as a thought leader among product development, process development, or other project team members.
Many laud those who seem to have "all the answers," but to what questions? Solving problems in your business is not a trivia game; having all the answers to questions that do not expose the underlying causes of issues or reveal improvement opportunities is of little value to your team. In most cases, it is much easier to find an answer to a question than it is to construct a question in a way that maximizes the value of the answer.
Modern gurus of self-help have changed the narrative from “improve your weaknesses” to “play to your strengths.” However, the –abilities that drive performance in manufacturing and service operations require both approaches. A successful strategy includes extracting maximum value from well-developed –abilities and continually improving the weaker ones. The –abilities that drive performance include stability, reliability, profitability, and others. Some are more critical in a specific context; some have multiple interpretations; all deserve attention.
The –abilities that drive performance are straightforward concepts. The problem is that many managers and entrepreneurs lose sight of the basics while pursuing higher-level objectives. Let this post be a warning against this and a reminder of how solid fundamentals create a path to success.
In Part 1, the D•I•P•O•D Process Model and template were presented and explained. In this installment, an example deployment will be illustrated to demonstrate the variety of factors to be considered in an analysis. A special note on troubleshooting warns practitioners against developing a false sense of security or accomplishment. Then, a number of common errors will be shared to help practitioners avoid them.
Well-designed models can be invaluable aids to development and analysis. 3D CAD models assist in detecting physical interferences in an assembly and in rapidly calculating stresses within its components. Mold-flow analysis helps injection molders predict processing problems. Various forms of simulation help us evaluate potential performance and identify risks before any products are manufactured, tooling built, routes established, or services performed.
Successful process planning, troubleshooting, and continuous improvement begin with applying fundamentals. Therefore, a model need not be as sophisticated as mold-flow or finite-element analysis to be useful, nor does it require high-performance computers with extensive computational capability. For many purposes, a simple diagram can provide the guidance needed for users to achieve breakout performance by focusing attention on what is relevant to the achievement of objectives, while clearing the clutter of distractions. The D•I•P•O•D Process Model is a great example of effective simplicity when used for process planning, development, or troubleshooting.
Successful managers are – or need to quickly become – effective delegators. Many managers convince themselves, and sometimes others, that they are effectively delegating by assigning many tasks and giving many orders. Unfortunately, however, this is most often indicative of an antithetical situation. Effective delegation is a skill, like any other, that can be learned, practiced, and honed. To do so, managers must understand the difference between delegating and dumping.
To thoroughly develop this understanding, it is useful to consider the differences between delegating and dumping as they relate to five phases: Assignment, Support, Follow-up (or Progress Check), Feedback, and Recurrence.
If you'd like to contribute to this blog, please email firstname.lastname@example.org with your suggestions.
© JayWink Solutions, LLC