The work balance chart is a critical component of a line balancing effort. It is both the graphical representation of the allocation of task time among operators, equipment, and transfers in a manufacturing or service process and a tool used to achieve an equal distribution.
Like other tools discussed in “The Third Degree,” a work balance chart may be referenced by other names in the myriad resources available. It is often called an operator balance chart, a valid moniker if only manual tasks are considered. It is also known as a Yamazumi Board. “Yamazumi” is Japanese for “stack up;” this term immediately makes sense when an example chart is seen, but requires an explanation to every non-Japanese speaker one encounters. Throughout the following presentation, “work balance chart,” or “WBC,” is used to refer to this tool and visual aid. This term is the most intuitive and characterizes the tool’s versatility in analyzing various forms of “work.”
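To make the “stack up” idea concrete, the sketch below builds a crude, text-only work balance chart in Python. All task times and the takt time are hypothetical examples, not data from any referenced article; a real WBC would also distinguish manual, machine, and walking time within each stack.

```python
# A minimal, text-only work balance "chart" sketch.
# All task times (seconds) and the takt time are hypothetical.

takt_time = 60.0  # seconds per unit (illustrative demand rate)

assignments = {
    "Operator 1": {"load part": 12, "weld": 38, "inspect": 15},
    "Operator 2": {"deburr": 18, "assemble": 25},
    "Operator 3": {"pack": 14, "label": 6, "move to dock": 8},
}

for operator, tasks in assignments.items():
    total = sum(tasks.values())
    bar = "#" * round(total / takt_time * 20)   # crude "stacked" bar
    flag = "  <-- exceeds takt" if total > takt_time else ""
    print(f"{operator:<12} {total:5.1f} s |{bar}{flag}")
```

The imbalance is immediately visible: Operator 1 exceeds takt time while Operator 3 is underloaded, suggesting that tasks be redistributed – precisely the analysis a WBC supports.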
A cause & effect diagram is best conceptualized as a specialized application and extension of an affinity diagram. Both types of diagram can be used for proactive (e.g. development planning) or reactive (e.g. problem-solving) purposes. Both use brainstorming techniques to collect information that is sorted into related groups. Where the two diverge is in the nature of relationships represented.
An affinity diagram may present several types of relationships among pieces of information collected. A cause & effect diagram, in contrast, is dedicated to a single relationship and its “direction,” namely, what is cause and what is effect.

Most manufactured goods are produced and distributed to the marketplace, where consumers are then sought. Services, in contrast, are not “produced” until there is a “consumer.” Simultaneous production and consumption is a hallmark of service; no inventory can be accumulated to compensate for fluctuating demand.
Instead, demand must be managed via predictable performance and efficiency. A service blueprint documents how a service is delivered, delineating customer actions and corresponding provider activity. Its pictorial format facilitates searches for improvements in current service delivery and identification of potential complementary offerings. A service blueprint can also be created proactively to optimize a delivery system before a service is made available to customers.

Another way to Be A Zero – in a good, productive way – is to operate on a zero-based schedule. An organization’s time is the aggregate of individuals’ time and is often spent carelessly. When a member of an organization spends time on any endeavor, the organization’s time is being spent. When groups are formed, the expenditure of time multiplies. Time is the one resource that cannot be increased by persuasive salespeople, creative marketing, strategic partnerships, or any other strategy; it must be managed.
“Everyone” in business knows that “time is money;” it only makes sense that time should be budgeted as carefully as financial resources. Like ZBB (Zero-Based Budgeting – Part 1), Zero-Based Scheduling (ZBS) can be approached in two ways; one ends at zero, the other begins there.

The AIAG/VDA FMEA Handbook presents standard and alternate form sheets for Design, Process, and Supplemental FMEA. The formats presented do not preclude further customization, however. In this installment of the “FMEA” series, suggested modifications to the standard-format form sheets are presented. The rationale for changes is also provided to facilitate practitioners’ development of the most effective documentation for use in their organizations and by their customers.
No matter how useful or well-written a standard, guideline, or instruction is, there are often shortcuts, or “tricks,” to its efficient utilization. In the case of the aligned AIAG/VDA Failure Modes and Effects Analysis method, one trick is to use visual Action Priority tables.
In this installment of the “FMEA” series, use of visual tables to assign an Action Priority (AP) to a failure chain is introduced. Visual aids are used in many pursuits to provide clarity and increase efficiency. Visual AP tables are not included in the AIAG/VDA FMEA Handbook, but are derived directly from it to provide these benefits to FMEA practitioners.
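As a rough illustration of how a table lookup can replace case-by-case deliberation when assigning an AP, consider the sketch below. The banding rules are simplified assumptions for demonstration only – not the official AIAG/VDA table, which practitioners should encode directly from the Handbook.

```python
# Illustrative Action Priority (AP) lookup. The bands below are
# simplified assumptions for demonstration -- NOT the official
# AIAG/VDA table, which should be transcribed from the Handbook.

def action_priority(severity: int, occurrence: int, detection: int) -> str:
    """Return "H", "M", or "L" for one failure chain (simplified bands)."""
    if not all(1 <= r <= 10 for r in (severity, occurrence, detection)):
        raise ValueError("S, O, and D ratings must be integers from 1 to 10")
    if severity >= 9 and (occurrence >= 4 or detection >= 7):
        return "H"
    if severity >= 5 and occurrence >= 4 and detection >= 5:
        return "H"
    if severity >= 5 and (occurrence >= 2 or detection >= 5):
        return "M"
    return "L"

# A "visual AP table" is this lookup rendered over a grid, so
# priorities can be read at a glance instead of computed each time:
for s in (9, 7, 3):
    row = " ".join(action_priority(s, o, 5) for o in range(1, 11))
    print(f"S={s}, D=5, O=1..10: {row}")
```

Rendering the full grid with color coding – in a spreadsheet, for example – yields the kind of visual table described above.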
To conduct a Process FMEA according to AIAG/VDA alignment, the seven-step approach presented in Vol. VI (Aligned DFMEA) is used. The seven steps are repeated with a new focus of inquiry. Like the DFMEA, several system-, subsystem-, and component-level analyses may be required to fully understand a process.

Paralleling previous entries in the “FMEA” series, this installment presents the 7-step aligned approach applied to process analysis and the “Standard PFMEA Form Sheet.” Review of classical FMEA and aligned DFMEA is recommended prior to pursuing aligned PFMEA; familiarity with the seven steps, terminology used, and documentation formats will make aligned PFMEA more comprehensible.

Preparations for Process Failure Modes and Effects Analysis (Process FMEA) (see Vol. II) occur, in large part, while the Design FMEA undergoes revision to develop and assign Recommended Actions. An earlier start, while ostensibly desirable, may result in duplicated effort. As a design evolves, the processes required to support it also evolve; allowing a design to reach a sufficient level of maturity to minimize process redesign is an efficient approach to FMEA.
In this installment of the “FMEA” series, how to conduct a “classical” Process FMEA (PFMEA) is presented as a close parallel to that of DFMEA (Vol. III). Each is prepared as a standalone reference for those engaged in either activity, but reading both is recommended to maintain awareness of the interrelationship of analyses.

Prior to conducting a Failure Modes and Effects Analysis (FMEA), several decisions must be made. The scope and approach of analysis must be defined, as well as the individuals who will conduct the analysis and what expertise each is expected to contribute.
Information-gathering and planning are critical elements of successful FMEA. Adequate preparation reduces the time and effort required to conduct a thorough FMEA, thereby reducing lifecycle costs, as discussed in Vol. I. Anything worth doing is worth doing well. In an appropriate context, conducting an FMEA is worth doing; plan accordingly.

Failure Modes and Effects Analysis (FMEA) is most commonly used in product design and manufacturing contexts. However, it can also be helpful in other applications, such as administrative functions and service delivery. Each application context may require refinement of definitions and rating scales to provide maximum clarity, but the fundamentals remain the same.
Several standards have been published defining the structure and content of Failure Modes and Effects Analyses (FMEAs). Within these standards, there are often alternate formats presented for portions of the FMEA form; these may also change with subsequent revisions of each standard. Add to this variety the diversity of industry- and customer-specific requirements. Those unbeholden to an industry-specific standard are free to adapt features of several to create a unique form for their own purposes. The freedom to customize results in a virtually limitless number of potential variants. Few potential FMEA variants are likely to have broad appeal, even among those unrestricted by customer requirements. This series aims to highlight the most practical formats available, encouraging a level of consistency among practitioners that maintains Failure Modes and Effects Analysis as a portable skill. Total conformity is not the goal; presenting perceived best practices is.

Committing resources to project execution is a critical responsibility for any organization or individual. Executing poor-performing projects can be disastrous for sponsors and organizations; financial distress, reputational damage, and sinking morale, among other issues, can result. Likewise, rejecting promising projects can limit an organization’s success by any conceivable measure.
The risks inherent in project selection compel sponsors and managers to follow an objective and methodical process to make decisions. Doing so leads to project selection decisions that are consistent, comparable, and effective. Review and evaluation of these decisions and their outcomes also becomes straightforward.
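A weighted scoring model is one common way to achieve this consistency and comparability; the sketch below illustrates the idea, not a specific process recommended in the referenced article. Criteria, weights, and scores are purely hypothetical.

```python
# Hypothetical weighted scoring model for project selection.
# Criteria, weights, and 1-10 scores are illustrative assumptions only.

weights = {"strategic fit": 0.40, "expected ROI": 0.35, "risk (inverted)": 0.25}

candidates = {
    "Project A": {"strategic fit": 8, "expected ROI": 6, "risk (inverted)": 7},
    "Project B": {"strategic fit": 5, "expected ROI": 9, "risk (inverted)": 4},
    "Project C": {"strategic fit": 7, "expected ROI": 7, "risk (inverted)": 8},
}

def weighted_score(scores: dict) -> float:
    """Collapse criterion scores into one comparable number."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank candidates; scoring every project against the same criteria
# makes decisions comparable and the rationale reviewable later.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```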
Myriad tools have been developed to aid collaboration among team members who are geographically separated. Temporally separated teams receive much less attention, despite this type of collaboration being paramount for success in many operations.

To achieve performance continuity in multi-shift operations, an effective pass-down process is required. Software is available to facilitate pass-down, but is not required for an effective process. The lowest-tech tools are often the best choices. A structured approach is the key to success – one that encourages participation, organization, and consistent execution.

Of the “eight wastes of lean,” the impacts of defects may be the easiest to understand. Most find the need to rework or replace a defective part or repeat a faulty service, and the subsequent costs, to be intuitive. The consequences of excess inventory, motion, or transportation, however, may require a deeper understanding of operations management to fully appreciate.
Conceptually, poka yoke (poh-kah yoh-keh) is one of the simplest lean tools; at least it was at its inception. Over time, use of the term has morphed and expanded, increasing misuse and confusion. The desire to appear enlightened and lean has led many to misappropriate the term, applying it to any mechanism used, or attempt made, to reduce defects. Poka yoke is often conflated with other process control mechanisms, including engineering controls and management controls. To effectively reduce the occurrence of errors and resultant defects, it is imperative that process managers differentiate between poka yoke devices, engineering controls, and management controls. Understanding the capabilities and limitations of each allows appropriate actions to be taken to optimize the performance of any process.

“Fundamentals of Group Decision-Making” (Vol. IV) addressed structural attributes of decision-making groups. In this volume, we discuss some ways a group’s activities can be conducted. An organization may employ several different techniques, at different times, in order to optimize the decision-making process for a specific project or group.
The following selection of techniques is not comprehensive; organizations may discover others that are useful. Also, an organization may develop its own technique, often using a commonly known technique as a foundation on which to create a unique process. The choice or development of a decision-making process must consider the positive and negative impacts – potential or realized – on decision quality, efficiency, and organizational performance factors.

In business contexts, many decisions are made by a group instead of an individual. The same is true for other types of organization as well, such as nonprofits, educational institutions, and legislative bodies. Group decision-making has its advantages and its disadvantages. Several other considerations are also relevant to group decision-making, such as selecting members, defining decision rules, and choosing or developing a process to follow.
Successful group decision-making relies on a disciplined approach that proactively addresses common pitfalls. If an organization establishes a standard that defines how it will form groups and conduct its decision-making activities, it can reap the rewards of faster, higher-quality decisions, clearer expectations, less conflict, and greater cooperation.

Uses of augmented reality (AR) in various industries have been described in previous installments of “Augmented Reality” (Part 1, Part 2). In this installment, we will explore AR applications aimed at improving customer experiences in service operations. Whether creating new service options or improving delivery of existing services, AR has the potential to transform our interactions with service providers.
Front-office operations are mostly transparent due to customer participation. Customer presence is a key characteristic that differentiates services from the production of goods. Thus, technologies employed in service industries are often highly visible. This can be a blessing or a curse.

Some of the augmented reality (AR) applications most likely to attract popular attention were presented in “Part 1: An Introduction to the Technology.” When employed by manufacturing companies, AR is less likely to be experienced directly by the masses, but may have a greater impact on their lives. There may be a shift, however, as AR applications pervade product development and end-user activities.
In this installment, we look at AR applications in manufacturing industries that improve operations, including product development, quality control, and maintenance. Some are involved directly in the transformation of materials into end products, while others fill supporting roles. The potential impact of AR use on customer satisfaction will also be explored.

When we see or hear a reference to advanced technologies, many of us think of modern machinery used to perform physical processes, often without human intervention. CNC machining centers, robotic work cells, automated logistics systems, drones, and autonomous vehicles often eclipse other technologies in our visions. Digital tools are often overlooked simply because many of us find it difficult to visualize their use in the physical environments we regularly inhabit.
There is an entire class of digital tools that is rising in prominence, yet currently receives little attention in mainstream discourse: augmented reality (AR). There are valid applications of AR in varied industries. Increased awareness and understanding of these applications and the potential they possess for improving safety, quality, and productivity will help organizations identify opportunities to take the next step in digital transformation, building on predecessor technologies such as digital twins and virtual reality.

Digital Twin technology existed long before this term came into common use. Over time, existing technology has advanced, new applications and research initiatives have surfaced, and related technologies have been developed, with no single entity guiding the field. This lack of centralized “ownership” of the term or technology has led to the proliferation of differing definitions of “digital twin.”
Some definitions focus on a specific application or technology – that developed by those offering the definition – presumably to co-opt the term for their own purposes. Arguably, the most useful definition, however, is the broadest – one that encompasses the range of relevant technologies and applications, capturing their corresponding value to the field. To this end, I offer the following definition of digital twin: An electronic representation of a physical entity – product, machine, process, system, or facility – that aids understanding of the entity’s design, operation, capabilities, or condition.

Given the amount of time people spend in meetings, organizations expend shockingly little effort to ensure that these meetings have value. Rarely is an employee – much less a volunteer – provided any formal instruction on leading or participating in meetings; most of us learn by observing the behavior of others. The low probability that those around us have been trained in optimal meeting practices renders this exercise equivalent to “the blind leading the blind.” The nature of these meetings is more likely to demonstrate the power structure of the organization than proper protocols.
Typical meetings suffer from a raft of problems that render them inefficient or ineffective. That is, they range from a moderate waste of time, while accomplishing something, to a total waste of time that accomplishes nothing. This need not be the case, however. Though an immediate overhaul may be an unrealistic expectation, incremental changes can be made to the way meetings are conducted, progressively increasing their value and developing a more efficient organization.

Troubleshooting a system can be guided by instructions created by its developer or someone with extensive experience operating and maintaining similar systems. Without a specific context, however, a troubleshooting process can be very difficult to describe. There is an enormous number of variables that could potentially warrant consideration. The type of system (mechanical, power transmission, fluid power, electrical, motion control, etc.), operating environment (indoor, outdoor, arid, tropical, arctic, etc.), and severity of duty are only the beginning.
The vast array of systems and situations that could be encountered requires that troubleshooting be learned as a generalized skill. What tool set could be more general, more universally applicable, than our senses of sight, hearing, smell, taste, touch, and the most powerful of all, common sense?

Spaghetti Diagrams
The origin of the spaghetti diagram – when and where it was first used or who first recognized its resemblance to a plate of pasta – is not well known. What is clear is that this simple tool can be a very powerful representation of waste in various processes. An easily understood visual presentation often provides the impetus needed for an organization to advance its improvement efforts.
While flow charts (see Vol. II) depict logical progressions through a process, spaghetti diagrams illustrate physical progressions. The movements tracked may be made by people, materials, paperwork, or other entities. As is the case with other maps, spaghetti diagrams can be created in very simple form, with information added as improvement efforts advance.
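One way to turn the picture into a number – useful when prioritizing layout improvements – is to total the distance traveled along the observed path. The sketch below uses hypothetical waypoints and floor-plan coordinates.

```python
# Totaling the travel a spaghetti diagram depicts.
# Waypoint names and coordinates (meters) are hypothetical examples.

from math import dist  # Euclidean distance (Python 3.8+)

path = [
    ("workbench",  (0.0, 0.0)),
    ("tool crib",  (18.0, 4.0)),
    ("workbench",  (0.0, 0.0)),
    ("inspection", (6.0, 12.0)),
    ("workbench",  (0.0, 0.0)),
]

total = sum(dist(a, b) for (_, a), (_, b) in zip(path, path[1:]))
print(" -> ".join(name for name, _ in path))
print(f"Total travel: {total:.1f} m per cycle")
```

Multiplied by cycles per shift, the total makes the waste – and the payoff of relocating the tool crib or inspection station – plain.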
Facility Layout or Floor Plan
Of all business maps, the facility layout, or floor plan, is one of the most universal. If an organization has a physical presence – office, storefront, factory, etc. – it should have a documented layout that is updated as changes are made. Documented layouts are most commonly prepared for manufacturing facilities because of their large footprints and the large numbers of machines housed within them. Every type of organization, however, can benefit from a properly maintained layout drawing. Readily available CAD software and trained users make this a relatively simple task to complete.

Flow Charts
Introduced nearly a century ago, flow charts are one of the most basic mapping tools available; they are also very useful. As such, they have become ubiquitous, though the name used may vary slightly – flow diagram, process map, etc. When packaged with a PFMEA and Control Plan, a flow chart is called a Process Flow Diagram (PFD). Extensions of the original flow chart have also been developed, identified with new aliases for what is, at its core, a process flow chart.
The variations need not be a distraction; a basic flow chart can be very useful to your organization. Once a basic chart is available, it can be expanded or modified to suit your needs as you learn and gain experience. The following discussion demonstrates this progression.

“Beware the Metrics System – Part 1” presented potential advantages of implementing a metrics system, metric classifications, and warnings of potential pitfalls. This installment will provide examples from diverse industries and recommendations for development and management of metrics systems.