Choosing effective strategies for waging war against error in manufacturing and service operations requires an understanding of “the enemy.” The types of error to be combatted, the sources of these errors, and the amount of error that will be tolerated are important components of a functional definition (see Vol. I for an introduction).
The traditional view is that the amount of error to be accepted is defined by the specification limits of each characteristic of interest. Exceeding the specified tolerance of any characteristic immediately transforms the process output from “good” to “bad.” This is a very restrictive and misleading point of view. Much greater insight is provided regarding product performance and customer satisfaction by loss functions.
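The contrast can be made concrete with Taguchi's quadratic loss function, which assigns an increasing cost to any deviation from target rather than zero cost everywhere inside the tolerance. A minimal sketch, assuming hypothetical dimensions and costs (the constant k is chosen so the quadratic loss equals the scrap cost at the specification limit):

```python
def spec_limit_loss(y, lsl, usl, scrap_cost):
    """Traditional view: zero loss inside the spec limits, full cost outside."""
    return 0.0 if lsl <= y <= usl else scrap_cost

def taguchi_loss(y, target, k):
    """Quadratic loss: cost grows with any deviation from target."""
    return k * (y - target) ** 2

# Hypothetical example: target 10.0 mm, spec 10.0 +/- 0.5 mm,
# $20 cost when a part reaches the spec limit.
k = 20.0 / 0.5 ** 2  # k chosen so taguchi_loss(10.5) == 20.0

# A part measuring 10.4 mm is "good" by the traditional view (loss = 0),
# yet the quadratic view already assigns it most of the limit-level loss.
good_bad_view = spec_limit_loss(10.4, 9.5, 10.5, 20.0)   # 0.0
loss_function_view = taguchi_loss(10.4, 10.0, k)         # 12.8
```

The hypothetical numbers illustrate why loss functions are more informative: two parts on either side of a spec limit differ negligibly in performance, yet the traditional view labels one "good" and the other "bad."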
Myriad tools have been developed to aid collaboration among team members who are geographically separated. Temporally separated teams receive much less attention, despite this type of collaboration being paramount for success in many operations.
To achieve performance continuity in multi-shift operations, an effective pass-down process is required. Software is available to facilitate pass-down, but is not required for an effective process. The lowest-tech tools are often the best choices. A structured approach is the key to success – one that encourages participation, organization, and consistent execution.
There is some disagreement among quality professionals about whether precontrol is a form of statistical process control (SPC). Like many tools prescribed by the Shainin System, precontrol’s statistical sophistication is disguised by its simplicity. The attitude of many seems to be that if it isn’t difficult or complex, it must not be rigorous.
Despite its simplicity, precontrol provides an effective means of process monitoring with several advantages compared to control charting.
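For readers unfamiliar with the mechanics, the classic precontrol scheme divides the tolerance into zones and applies simple run rules to periodic two-unit samples: green is the middle half of the tolerance, yellow the outer quarters, and red anything outside specification. A minimal sketch, with hypothetical specification limits in the tests and simplified rule wording:

```python
def precontrol_zone(x, lsl, usl):
    """Classify a measurement into the classic precontrol zones.
    Green: middle half of the tolerance; yellow: outer quarters;
    red: outside the specification limits."""
    quarter = (usl - lsl) / 4.0
    if x < lsl or x > usl:
        return "red"
    if lsl + quarter <= x <= usl - quarter:
        return "green"
    return "yellow"

def precontrol_decision(pair, lsl, usl):
    """Apply the classic two-unit run rule:
    two greens, or one green and one yellow: continue running;
    two yellows: adjust the process; any red: stop."""
    zones = [precontrol_zone(x, lsl, usl) for x in pair]
    if "red" in zones:
        return "stop"
    if zones.count("yellow") == 2:
        return "adjust"
    return "continue"
```

This sketch omits the qualification step (typically several consecutive greens before full-rate running begins); consult a complete precontrol reference before deployment.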
Lesser known than Six Sigma, but no less valuable, the Shainin System is a structured program for problem solving, variation reduction, and quality improvement. While there are similarities between these two systems, some key characteristics lie in stark contrast.
This installment of “The War on Error” introduces the Shainin System, providing background information and a description of its structure. Some common problem-solving tools will also be described. Finally, a discussion of the relationship between the Shainin System and Six Sigma will be presented, allowing readers to evaluate the potential for implementation of each in their organizations.
Despite the ubiquity of corporate Six Sigma programs and the intensity of their promotion, it is not uncommon for graduates to enter industry with little exposure to these programs and less understanding of their administration or purpose. Universities that offer Six Sigma instruction often do so as a separate certificate, unintegrated with any degree program. Students are often unaware of the availability or the value of such a certificate.
Upon entering industry, the tutelage of an invested and effective mentor is far from guaranteed. This can curtail entry-level employees’ ability to contribute to company objectives, or even to understand the conversations taking place around them. Without a structured introduction, these employees may struggle to succeed in their new workplace, while responsibility for failure is misplaced.
This installment of “The War on Error” aims to provide an introduction sufficient to facilitate entry into a Six Sigma environment. May it also serve as a refresher for those seeking reentry after a career change or hiatus.
While Vol. IV focused on variable gauge performance, this installment of “The War on Error” presents the study of attribute gauges. Requiring the judgment of human appraisers adds a layer of nuance to attribute assessment. Although we refer to attribute gauges, assessment may be made exclusively by the human senses. Thus, analysis of attribute gauges may be less intuitive or straightforward than that of their variable counterparts.
Conducting an attribute gauge study is similar to conducting a variable gauge R&R study. The key difference is in data collection – rather than a continuum of numeric values, attributes are evaluated with respect to a small number of discrete categories. Categorization can be as simple as pass/fail; it may also involve grading a feature relative to a “stepped” scale. The scale could contain several gradations of color, transparency, or another visual characteristic. It could also be graded according to subjective assessments of fit or another performance characteristic.
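Agreement between appraisers on categorical data is commonly summarized with statistics such as Cohen's kappa, which discounts the agreement expected by chance alone. A minimal sketch, using hypothetical pass/fail judgments on ten parts:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Agreement between two appraisers, discounted for chance agreement."""
    n = len(ratings_a)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement: probability both appraisers pick the same
    # category if each chose independently at their observed rates.
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(counts_a) | set(counts_b))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical ratings: the appraisers disagree on one part of ten.
appraiser_1 = ["pass", "pass", "pass", "pass", "fail",
               "fail", "pass", "fail", "pass", "pass"]
appraiser_2 = ["pass", "pass", "fail", "pass", "fail",
               "fail", "pass", "fail", "pass", "pass"]
kappa = cohens_kappa(appraiser_1, appraiser_2)
```

Note that raw percent agreement here is 90%, while kappa is noticeably lower (about 0.78), because some agreement would occur by chance even with careless appraisers.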
While you may have been hoping for rest and relaxation, the title actually refers to Gauge R&R – repeatability and reproducibility. Gauge R&R, or GRR, comprises a substantial share of the effort required by measurement system analysis. Preparation and execution of a GRR study can be resource-intensive; taking shortcuts, however, is ill-advised. The costs of accepting an unreliable measurement system are long-term and far in excess of the short-term inconvenience caused by a properly-conducted analysis.
The focus here is the evaluation of variable gauges. Prerequisites of a successful GRR study will be described and methodological alternatives will be defined. Finally, interpretation of results and acceptance criteria will be discussed.
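As a preview of interpretation, the %GRR statistic expresses measurement-system spread as a share of total observed variation; the commonly cited AIAG guidelines rate under 10% acceptable, 10–30% marginal, and over 30% unacceptable. A minimal sketch, assuming hypothetical standard deviations obtained from a variance-components analysis:

```python
import math

def percent_grr(sigma_repeatability, sigma_reproducibility, sigma_part):
    """Express gauge variation (GRR) as a percentage of total variation.
    Standard deviations combine as root-sum-of-squares."""
    grr = math.sqrt(sigma_repeatability ** 2 + sigma_reproducibility ** 2)
    total = math.sqrt(grr ** 2 + sigma_part ** 2)
    return 100.0 * grr / total

def grr_verdict(pct):
    """Commonly cited AIAG guideline bands."""
    if pct < 10.0:
        return "acceptable"
    if pct <= 30.0:
        return "marginal"
    return "unacceptable"

# Hypothetical study results (same units as the characteristic measured):
pct = percent_grr(0.03, 0.04, 0.12)   # about 38.5% -> unacceptable
```

Full studies also report repeatability and reproducibility separately (to direct corrective action toward the gauge or the appraisers) and may compare GRR to the tolerance rather than to total variation.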
There is a “universal sequence for quality improvement,” according to the illustrious Joseph M. Juran, that defines the actions to be taken by any team to effect change. This includes teams pursuing error- and defect-reduction initiatives, variation reduction, or quality improvement by any other description.
Two of the seven steps of the universal sequence are “journeys” that the team must take to complete its problem-solving mission. The “diagnostic journey” and the “remedial journey” comprise the core of the problem-solving process and, thus, warrant particular attention.
Of the “eight wastes of lean,” the impacts of defects may be the easiest to understand. Most find the need to rework or replace a defective part or repeat a faulty service, and the subsequent costs, to be intuitive. The consequences of excess inventory, motion, or transportation, however, may require a deeper understanding of operations management to fully appreciate.
Conceptually, poka yoke (poh-kah yoh-keh) is one of the simplest lean tools; at least it was at its inception. Over time, use of the term has morphed and expanded, increasing misuse and confusion. The desire to appear enlightened and lean has led many to misappropriate the term, applying it to any mechanism used, or attempt made, to reduce defects. Poka yoke is often conflated with other process control mechanisms, including engineering controls and management controls.
To effectively reduce the occurrence of errors and resultant defects, it is imperative that process managers differentiate between poka yoke devices, engineering controls, and management controls. Understanding the capabilities and limitations of each allows appropriate actions to be taken to optimize the performance of any process.
Every organization wants error to be kept at a minimum. The dedication to fulfilling this desire, however, often varies according to the severity of consequences that are likely to result. Manufacturers miss delivery dates or ship faulty product; service providers fail to satisfy customers or damage their property; militaries lose battles or cause civilian casualties; all increase the cost of operations.
You probably have some sensitivity to the effects errors have on your organization and its partners. This series explores strategies, tools, and related concepts to help you effectively combat error and its effects. This is your induction; welcome to The War on Error.
Uses of augmented reality (AR) in various industries have been described in previous installments of “Augmented Reality” (Part 1, Part 2). In this installment, we will explore AR applications aimed at improving customer experiences in service operations. Whether creating new service options or improving delivery of existing services, AR has the potential to transform our interactions with service providers.
Front-office operations are mostly transparent due to customer participation. Customer presence is a key characteristic that differentiates services from the production of goods. Thus, technologies employed in service industries are often highly visible. This can be a blessing or a curse.
Some of the augmented reality (AR) applications most likely to attract popular attention were presented in “Part 1: An Introduction to the Technology.” When employed by manufacturing companies, AR is less likely to be experienced directly by the masses, but may have a greater impact on their lives. There may be a shift, however, as AR applications pervade product development and end-user activities.
In this installment, we look at AR applications in manufacturing industries that improve operations, including product development, quality control, and maintenance. Some are involved directly in the transformation of materials to end products, while others fill supporting roles. The potential impact on customer satisfaction that AR use provides will also be explored.
When we see or hear a reference to advanced technologies, many of us think of modern machinery used to perform physical processes, often without human intervention. CNC machining centers, robotic work cells, automated logistics systems, drones, and autonomous vehicles often eclipse other technologies in our visions. Digital tools are often overlooked simply because many of us find it difficult to visualize their use in the physical environments we regularly inhabit.
There is an entire class of digital tools that is rising in prominence, yet currently receives little attention in mainstream discourse: augmented reality (AR). There are valid applications of AR in varied industries. Increased awareness and understanding of these applications and the potential they possess for improving safety, quality, and productivity will help organizations identify opportunities to take the next step in digital transformation, building on predecessor technologies such as digital twins and virtual reality.
Since the dawn of the industrial age, manufacturers have sought ways to improve their operations. Over time, these attempts became more sophisticated, as techniques and models for the measurement of performance were developed.
Performance measurement for service industries is a much more recent development. Fortunately, much of the pioneering work in performance measurement undertaken in manufacturing industries is also applicable to service providers. However, some techniques require adaptation to the unique operating characteristics of service industries to provide the full benefit of the monitoring tools.
Overall Equipment Effectiveness (OEE) is a case in point. OEE could be used to track performance of equipment used to provide a service. It is much more informative of the core objectives of the operation, however, to use the analogous Overall Service Effectiveness (OSE). As the name implies, it provides a “big picture” view of the quality of service provided to customers.
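Both OEE and its service analog are computed as the product of three ratios: availability, performance, and quality. A minimal sketch, with hypothetical run times and unit counts:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of three ratios,
    each in [0, 1]. The same structure applies to the analogous
    Overall Service Effectiveness (OSE)."""
    return availability * performance * quality

# Hypothetical shift data:
availability = 430 / 480   # run time / planned production time (minutes)
performance = 400 / 450    # actual output / theoretical output at ideal rate
quality = 392 / 400        # good units / total units produced

score = oee(availability, performance, quality)   # roughly 0.78
```

The multiplicative structure is what makes the metric informative: a process that is 90% on each factor scores only about 73% overall, so no single factor can be neglected.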
“Beware the Metrics System – Part 1” presented potential advantages of implementing a metrics system, metric classifications, and warnings of potential pitfalls. This installment will provide examples from diverse industries and recommendations for development and management of metrics systems.
Every business uses metrics to assess various aspects of its performance. Some – usually the smallest and least diversified – may focus exclusively on the most basic financial measures. Others may be found at the opposite end of the spectrum, tracking a multitude of metrics across the entire organization – finance, operations, sales & marketing, human resources, research & development, and so on. The more extensively metricated organization is not necessarily more efficiently operated or more effectively managed, however. The administration of a metrics system incurs costs that must be balanced with its utility for it to be valuable to an organization.
An efficacious metrics system can greatly facilitate an organization’s management and improvement; a misguided one can be detrimental, in numerous ways, to individuals, teams, and the entire organization. The structure of a well-designed metrics system is influenced by the nature of the organization to be monitored – product vs. service, for-profit vs. nonprofit, public vs. private, large vs. small, start-up vs. mature, etc. Organizations often choose to present their metrics systems according to popular templates – Management by Objectives (MBO), Key Performance Indicators (KPI), Objectives and Key Results (OKR), or Balanced Scorecard – but may choose to create a unique system or a hybrid. No matter what form it takes, or what name it is given, the purpose of a metrics system remains constant: to monitor and control – that is, to manage – the organization’s performance according to criteria its leaders deem relevant.
The ability to formulate relevant, probing, often open-ended questions and present them at opportune times to appropriate individuals is incredibly valuable. Honing this skill will secure your reputation as a thought leader among product development, process development, or other project team members.
Many laud those who seem to have “all the answers,” but to what questions? Solving problems in your business is not a trivia game; having all the answers to questions that do not expose the underlying causes of issues or reveal improvement opportunities is of little value to your team. In most cases, it is much easier to find an answer to a question than it is to construct a question in such a way that maximizes the value of the answer.
Modern gurus of self-help have changed the narrative from “improve your weaknesses” to “play to your strengths.” However, the –abilities that drive performance in manufacturing and service operations require both approaches. A successful strategy includes extracting maximum value from well-developed –abilities and continually improving the weaker ones. The –abilities that drive performance include stability, reliability, profitability, and others. Some are more critical in a specific context; some have multiple interpretations; all deserve attention.
The –abilities that drive performance are straightforward concepts. The problem is that many managers and entrepreneurs lose sight of the basics while pursuing higher-level objectives. Let this post be a warning against this and a reminder of how solid fundamentals create a path to success.
In Part 1, the D•I•P•O•D Process Model and template were presented and explained. In this installment, an example deployment will be illustrated to demonstrate the variety of factors to be considered in an analysis. Practitioners are warned against developing a false sense of security or accomplishment in a special note on troubleshooting. Then, a number of common errors will be shared to help practitioners avoid them.
Well-designed models can be invaluable aids to development and analysis. 3D CAD models assist the detection of physical interferences in an assembly and the rapid calculation of stresses within its components. Mold-flow analysis helps injection molders predict processing problems. Various forms of simulation help us evaluate potential performance and identify risks before any products are manufactured, tooling built, routes established, or services performed.
Successful process planning, troubleshooting, and continuous improvement begin with applying fundamentals. Therefore, a model need not be as sophisticated as mold-flow or finite-element analysis to be useful, nor does it require high-performance computers with extensive computational capability. For many purposes, a simple diagram can provide the guidance needed for users to achieve breakout performance by focusing attention on what is relevant to the achievement of objectives, while clearing the clutter of distractions. The D•I•P•O•D Process Model is a great example of effective simplicity when used for process planning, development, or troubleshooting.
For a coherent discussion of culture to take place, it is important to define the term in its intended context. Social psychologist Goodwin Watson referred to ‘culture’ as “the total way of life characteristic of a somewhat homogeneous society of human beings,” differentiating its use in social science from the vernacular “refinement of taste in intellectual and aesthetic realms.”
Watson also quotes anthropologist Ralph Linton’s definition of ‘culture’ as “the configuration of learned behavior whose component elements are shared and transmitted by the members of a particular society.”
Key components of each definition will help us translate the concept of culture from a discussion of at-large society to one of a corporate environment.
Many manufacturing and service companies succumb to competitive pressure by embarking on misguided cost-reduction efforts, failing to take a holistic approach. To be clear, lean is the way to be; lean is not the same as cost reduction. Successful cost-reduction efforts consider the entire enterprise, the entire product life cycle, and, most importantly, the effects that changes will make on customers.
To avoid confusion, I will begin with a clarification. The full name of the document to which we refer generally as a FMEA (“fee-mah”) is Potential Failure Modes and Effects Analysis. This is not the ‘P’ to which I refer, however.
The title, “P” is for “Process,” is intended, first, to differentiate between a PFMEA (Process FMEA) and a DFMEA (Design FMEA). This differentiation refers specifically to product design and manufacture, as the distinction between them blurs when discussing service delivery systems.
Second, and the focal point of this post, is to differentiate between process and product. Many manufacturing organizations develop product FMEAs, mistakenly identifying them as process FMEAs.
Many times, solutions to operations problems can be found in unexpected places. In some cases, the opportunity for improvement is only recognized when a superior example is discovered. A fast-food restaurant could easily be overlooked by other service providers and manufacturers seeking best practices. The examples below aim to demonstrate why this potential benchmark should not be so quickly dismissed.
In order to implement an optimal solution to your company’s product development, capacity expansion, cost reduction, continuous improvement, or other project objective, your project team must be able to evaluate alternatives on four key qualitative measures. Each qualitative evaluation is informed by quantitative and pseudo-quantitative measures and other qualitative judgments that will vary by project and objective. Interpretation of these measures is required to reach logical conclusions regarding the optimality of proposed solutions.
Upon completion of the initial evaluations of alternatives, there may be no clear winner, one determined to be best in all aspects. In this situation, another round of evaluation must be conducted to determine the best trade-off of benefits to pursue. It is imperative that the project team consider the potential motivations of influencers; interpersonal conflicts, personal agendas, or other “office politics” can provide perverse incentives that jeopardize the team’s success. Focusing on the merits of each alternative will limit undue influence on the final decision, providing maximum benefit to the company, its employees, and its customers.
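One simple way to structure such a trade-off round is a weighted scoring matrix, in which each alternative's criterion ratings are combined using weights the team agrees on in advance, limiting the room for after-the-fact lobbying. A minimal sketch, where the criteria, weights, and ratings are all hypothetical:

```python
def weighted_scores(alternatives, weights):
    """Combine each alternative's criterion ratings into one weighted score."""
    return {name: sum(weights[criterion] * rating
                      for criterion, rating in ratings.items())
            for name, ratings in alternatives.items()}

# Hypothetical criteria with weights summing to 1, and 1-10 ratings:
weights = {"cost": 0.4, "risk": 0.3, "timeline": 0.2, "flexibility": 0.1}
alternatives = {
    "Alternative A": {"cost": 7, "risk": 5, "timeline": 8, "flexibility": 6},
    "Alternative B": {"cost": 5, "risk": 8, "timeline": 6, "flexibility": 8},
}

scores = weighted_scores(alternatives, weights)
best = max(scores, key=scores.get)
```

Agreeing on the weights before scoring the alternatives is the step that blunts office politics: the trade-off debate happens once, in the open, rather than being re-fought inside each score.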
Particularly prevalent among project evaluation shortcuts is to simply look for the alternative with the lowest initial cost. Unfortunately, that number is often misleading, misunderstood, or misquoted. Confidence in the accuracy of cost estimates is important, but initial cost remains but one criterion among many.
Four characteristics that form the basis for selection of optimal solutions are outlined in the following sections.
© JayWink Solutions, LLC