Myriad tools have been developed to aid collaboration among team members who are geographically separated. Temporally separated teams receive much less attention, despite this type of collaboration being paramount for success in many operations.
To achieve performance continuity in multi-shift operations, an effective pass-down process is required. Software is available to facilitate pass-down, but is not required for an effective process. The lowest-tech tools are often the best choices. A structured approach is the key to success – one that encourages participation, organization, and consistent execution.
Of the “eight wastes of lean,” the impacts of defects may be the easiest to understand. Most people find the costs of reworking or replacing a defective part, or of repeating a faulty service, to be intuitive. The consequences of excess inventory, motion, or transportation, however, may require a deeper understanding of operations management to fully appreciate.
Conceptually, poka yoke (poh-kah yoh-keh) is one of the simplest lean tools; at least it was at its inception. Over time, use of the term has morphed and expanded, increasing misuse and confusion. The desire to appear enlightened and lean has led many to misappropriate the term, applying it to any mechanism used, or attempt made, to reduce defects. Poka yoke is often conflated with other process control mechanisms, including engineering controls and management controls.
To effectively reduce the occurrence of errors and resultant defects, it is imperative that process managers differentiate between poka yoke devices, engineering controls, and management controls. Understanding the capabilities and limitations of each allows appropriate actions to be taken to optimize the performance of any process.
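A software analogy may help sharpen the distinction. In the sketch below (all names are hypothetical, chosen only for illustration), the string-based function represents a management control: the operator is instructed to pass a valid value, and the error is detected only after it has been made. The enum-based function represents a poka yoke: the interface makes the error impossible to commit in the first place.

```python
from enum import Enum

class Slot(Enum):
    """Only two valid fixture positions exist; no other value can be constructed."""
    LEFT = "left"
    RIGHT = "right"

def load_part_checked(slot: str) -> str:
    # Management-control analogue: the rule is enforced after the fact,
    # by inspecting an input that may already be wrong.
    if slot not in ("left", "right"):
        raise ValueError(f"invalid slot: {slot}")
    return f"loaded in {slot} slot"

def load_part_mistake_proofed(slot: Slot) -> str:
    # Poka yoke analogue: an invalid Slot cannot be created,
    # so the error is prevented rather than detected.
    return f"loaded in {slot.value} slot"
```

The parallel to the physical world: a checklist that says "insert the part with the notch facing left" is a management control; a fixture whose geometry only accepts the part one way is a poka yoke.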
“Fundamentals of Group Decision-Making” (Vol. IV) addressed structural attributes of decision-making groups. In this volume, we discuss some ways a group’s activities can be conducted. An organization may employ several different techniques, at different times, in order to optimize the decision-making process for a specific project or group.
The following selection of techniques is not comprehensive; organizations may discover others that are useful. Also, an organization may develop its own technique, often using a commonly known technique as a foundation on which to create a unique process. The choice or development of a decision-making process must consider the positive and negative impacts – potential or realized – on decision quality, efficiency, and organizational performance factors.
In business contexts, many decisions are made by a group instead of an individual. The same is true for other types of organization as well, such as nonprofits, educational institutions, and legislative bodies. Group decision-making has its advantages and its disadvantages. There are several other considerations also relevant to group decision-making, such as selecting members, defining decision rules, and choosing or developing a process to follow.
Successful group decision-making relies on a disciplined approach that proactively addresses common pitfalls. If an organization establishes a standard that defines how it will form groups and conduct its decision-making activities, it can reap the rewards of faster, higher-quality decisions, clearer expectations, less conflict, and greater cooperation.
Uses of augmented reality (AR) in various industries have been described in previous installments of “Augmented Reality” (Part 1, Part 2). In this installment, we will explore AR applications aimed at improving customer experiences in service operations. Whether creating new service options or improving delivery of existing services, AR has the potential to transform our interactions with service providers.
Front-office operations are mostly transparent due to customer participation. Customer presence is a key characteristic that differentiates services from the production of goods. Thus, technologies employed in service industries are often highly visible. This can be a blessing or a curse.
Some of the augmented reality (AR) applications most likely to attract popular attention were presented in “Part 1: An Introduction to the Technology.” When employed by manufacturing companies, AR is less likely to be experienced directly by the masses, but may have a greater impact on their lives. There may be a shift, however, as AR applications pervade product development and end-user activities.
In this installment, we look at AR applications in manufacturing industries that improve operations, including product development, quality control, and maintenance. Some are involved directly in the transformation of materials to end products, while others fill supporting roles. The potential impact on customer satisfaction that AR use provides will also be explored.
When we see or hear a reference to advanced technologies, many of us think of modern machinery used to perform physical processes, often without human intervention. CNC machining centers, robotic work cells, automated logistics systems, drones, and autonomous vehicles often eclipse other technologies in our visions. Digital tools are often overlooked simply because many of us find it difficult to visualize their use in the physical environments we regularly inhabit.
There is an entire class of digital tools that is rising in prominence, yet currently receives little attention in mainstream discourse: augmented reality (AR). There are valid applications of AR in varied industries. Increased awareness and understanding of these applications and the potential they possess for improving safety, quality, and productivity will help organizations identify opportunities to take the next step in digital transformation, building on predecessor technologies such as digital twins and virtual reality.
Digital Twin technology existed long before this term came into common use. Over time, existing technology has advanced, new applications and research initiatives have surfaced, and related technologies have been developed. This lack of centralized “ownership” of the term or technology has led to the proliferation of differing definitions of “digital twin.”
Some definitions focus on a specific application or technology – that developed by those offering the definition – presumably to co-opt the term for their own purposes. Arguably, the most useful definition, however, is the broadest – one that encompasses the range of relevant technologies and applications, capturing their corresponding value to the field. To this end, I offer the following definition of digital twin:
An electronic representation of a physical entity – product, machine, process, system, or facility – that aids understanding of the entity’s design, operation, capabilities, or condition.
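To illustrate the breadth of this definition, a digital twin can be sketched, in its most minimal form, as a data structure that mirrors a physical entity’s identity and reported state. Everything in the sketch below – the class name, the machine ID, the sensor fields – is hypothetical, chosen only to show the shape of the idea:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Minimal twin: the identity of a physical entity plus its reported state."""
    entity_id: str
    entity_type: str                   # "product", "machine", "process", "system", "facility"
    state: dict = field(default_factory=dict)

    def update(self, readings: dict) -> None:
        # Synchronize the twin with the latest readings from the physical entity.
        self.state.update(readings)

# Hypothetical machine twin receiving sensor updates
twin = DigitalTwin("press-07", "machine")
twin.update({"spindle_temp_c": 64.2, "cycle_count": 15032})
```

Real implementations layer simulation, analytics, and visualization on top of this core, but the essence remains the same: an electronic record kept in sync with its physical counterpart.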
Given the amount of time people spend in meetings, organizations expend shockingly little effort to ensure that these meetings have value. Rarely is an employee – much less a volunteer – provided any formal instruction on leading or participating in meetings; most of us learn by observing the behavior of others. The low probability that those around us have been trained in optimal meeting practices renders this exercise equivalent to “the blind leading the blind.” The nature of these meetings is more likely to demonstrate the power structure of the organization than proper protocols.
Typical meetings suffer from a raft of problems that render them inefficient or ineffective. That is, they range from a moderate waste of time, while accomplishing something, to a total waste of time that accomplishes nothing. This need not be the case, however. Though an immediate overhaul may be an unrealistic expectation, incremental changes can be made to the way meetings are conducted, progressively increasing their value and developing a more efficient organization.
Troubleshooting a system can be guided by instructions created by its developer or someone with extensive experience operating and maintaining similar systems. Without a specific context, however, a troubleshooting process can be very difficult to describe. There is an enormous number of variables that could potentially warrant consideration. The type of system (mechanical, power transmission, fluid power, electrical, motion control, etc.), operating environment (indoor, outdoor, arid, tropical, arctic, etc.), and severity of duty are only the beginning.
The vast array of systems and situations that could be encountered requires that troubleshooting be learned as a generalized skill. What tool set could be more general, more universally applicable, than our senses of sight, hearing, smell, taste, touch, and the most powerful of all, common sense?
The origin of the spaghetti diagram – when and where it was first used or who first recognized its resemblance to a plate of pasta – is not well known. What is clear is that this simple tool can be a very powerful representation of waste in various processes. An easily understood visual presentation often provides the impetus needed for an organization to advance its improvement efforts.
While flow charts (see Vol. II) depict logical progressions through a process, spaghetti diagrams illustrate physical progressions. The movements tracked may be made by people, materials, paperwork, or other entities. As is the case with other maps, spaghetti diagrams can be created in very simple form, with information added as improvement efforts advance.
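The movements a spaghetti diagram traces can also be quantified, turning the visual impression of waste into a number that can be tracked as layouts improve. A minimal sketch, assuming movements are logged as (x, y) waypoints on a floor plan (the route below is hypothetical):

```python
from math import hypot

def path_length(waypoints):
    """Total straight-line travel distance for a sequence of (x, y) points."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]))

# Hypothetical walk, in meters: desk -> printer -> file room -> desk
route = [(0, 0), (12, 5), (3, 18), (0, 0)]
distance = path_length(route)
```

Recomputing the same routes after a layout change gives a before-and-after comparison of travel distance – a simple, concrete measure of the waste of motion the diagram makes visible.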
Facility Layout or Floor Plan
Of all business maps, the facility layout, or floor plan, is one of the most universal. If an organization has a physical presence – office, storefront, factory, etc. – it should have a documented layout that is updated as changes are made.
Documented layouts are most commonly prepared for manufacturing facilities because of their large footprints and large numbers of machines housed within them. Every type of organization, however, can benefit from a properly maintained layout drawing. Readily available CAD software and trained users make this a relatively simple task to complete.
Introduced nearly a century ago, flow charts are one of the most basic mapping tools available; they are also very useful. As such, they have become ubiquitous, though the name used may vary slightly – flow diagram, process map, etc. When packaged with a PFMEA and Control Plan, the flow chart is called a Process Flow Diagram (PFD). Extensions of the original flow chart have also been developed, identified with new aliases for what is, at its core, a process flow chart.
The variations need not be a distraction; a basic flow chart can be very useful to your organization. Once a basic chart is available, it can be expanded or modified to suit your needs as you learn and gain experience. The following discussion demonstrates this progression.
“Beware the Metrics System – Part 1” presented potential advantages of implementing a metrics system, metric classifications, and warnings of potential pitfalls. This installment will provide examples from diverse industries and recommendations for development and management of metrics systems.
Every business uses metrics to assess various aspects of its performance. Some – usually the smallest and least diversified – may focus exclusively on the most basic financial measures. Others may be found at the opposite end of the spectrum, tracking a multitude of metrics across the entire organization – finance, operations, sales & marketing, human resources, research & development, and so on. The more extensively metricated organization is not necessarily more efficiently operated or more effectively managed, however. The administration of a metrics system incurs costs that must be balanced with its utility for it to be valuable to an organization.
An efficacious metrics system can greatly facilitate an organization’s management and improvement; a misguided one can be detrimental, in numerous ways, to individuals, teams, and the entire organization. The structure of a well-designed metrics system is influenced by the nature of the organization to be monitored – product vs. service, for-profit vs. nonprofit, public vs. private, large vs. small, start-up vs. mature, etc. Organizations often choose to present their metrics systems according to popular templates – Management by Objectives (MBO), Key Performance Indicators (KPI), Objectives and Key Results (OKR), or Balanced Scorecard – but may choose to create a unique system or a hybrid. No matter what form it takes, or what name it is given, the purpose of a metrics system remains constant: to monitor and control – that is, to manage – the organization’s performance according to criteria its leaders deem relevant.
Successful managers are – or need to quickly become – effective delegators. Many managers convince themselves, and sometimes others, that they are effectively delegating by assigning many tasks and giving many orders. Unfortunately, however, this is most often indicative of an antithetical situation. Effective delegation is a skill, like any other, that can be learned, practiced, and honed. To do so, managers must understand the difference between delegating and dumping.
To thoroughly develop this understanding, it is useful to consider the differences between delegating and dumping as they relate to five phases: Assignment, Support, Follow-up (or Progress Check), Feedback, and Recurrence.
Many manufacturing and service companies succumb to competitive pressure by embarking on misguided cost-reduction efforts, failing to take a holistic approach. To be clear, lean is the way to be; lean is not the same as cost reduction. Successful cost-reduction efforts consider the entire enterprise, the entire product life cycle, and, most importantly, the effects that changes will make on customers.
To avoid confusion, I will begin with a clarification. The full name of the document to which we refer generally as a FMEA (“fee-mah”) is Potential Failure Modes and Effects Analysis. This is not the ‘P’ to which I refer, however.
The title, “P” is for “Process,” is intended, first, to differentiate between a PFMEA (Process FMEA) and a DFMEA (Design FMEA). This differentiation refers specifically to product design and manufacture, as the distinction between them blurs when discussing service delivery systems.
Second, and the focal point of this post, is to differentiate between process and product. Many manufacturing organizations develop product FMEAs, mistakenly identifying them as process FMEAs.
Last week’s post offered general advice regarding the work of “experts.” Writing it reminded me of a set of articles that I feel is worthy of some direct attention, as the theme becomes increasingly prevalent. Variations on “How to Read 100 Books a Year,” these articles demonstrate the misconception inherent in many of the “expert” articles of the type discussed. The misconception is that the advice given is universal.
Example articles include:
[Link] “How to Read 100 Books in a Year” – Thrive Global
[Link] “How to Read 100+ Books in One Year” – WikiHow
[Link] “How to Read 100 Books a Year” – Observer
[Link] “How to read 100 books in a year (and still have a life)” – Forrest Brazeal
[Link] “How I Read 100 Books in One Year (and How You Can, Too)” – BookRiot
[Link] “5 Steps To Reading 100 Books A Year” – AuthenticGrowth
Innumerable “experts” are now able to share their wisdom widely, at the click of a button, thanks to the expansion of social media and blogging platforms. The ability to publish one’s assertions, however, does not ensure the validity or value of the content. There are few limitations on what one can publish.
Consider an example at the opposite end of this spectrum – peer-reviewed journals. Despite the rigorous review process, authors are occasionally discredited after publication of an article. Falsified data, inappropriate research protocols, or inconsistent conclusions may be discovered.
The journal scenario is a good news/bad news situation. The good news is that there is a third party (the journal editor) responsible for maintaining the integrity and quality of published research who will publish a correction. The bad news is that, many times, the correction may not reach those who rely on the original work, trusting its accuracy. Also, reproductions of the article may not include the correction as an update or attachment.
Unfiltered content does not provide even this imperfect safety net. Perhaps one can leave a comment online, disputing an article’s content or conclusions, but where does that lead? Often, nowhere.
Many times, solutions to operations problems can be found in unexpected places. In some cases, the opportunity for improvement is only recognized when a superior example is discovered. A fast-food restaurant could easily be overlooked by other service providers and manufacturers seeking best practices. The examples below aim to demonstrate why this potential benchmark should not be so quickly dismissed.
Most of us have had both positive and negative experiences that have taught us valuable lessons. Those that set a good example for the people around them are the leaders we are taught to emulate; they are written about and admired by many, deservedly so. However, it is often the poorest examples, or most negative experiences, that teach us the most profound and longest-lasting lessons. Turning these lessons into positive statements that you put into practice will greatly improve your team’s chances for unity and success.
Based on this insight, gained as a member of numerous project teams, I have compiled several lessons that have been reinforced by involvement in – ahem – “imperfect” project executions. Though the “rules” described below were derived from process improvement, product launch, crisis response, and similar projects, many are universal. The phrasing or terminology may differ, but the spirit of each is appropriate for any team environment.
A January 16, 2018 article in USA Today reports that tightening labor markets have “provided a financial boon to many full-time employees, who are notching lots of overtime…”
The January 18, 2018 episode of NPR’s The Indicator from Planet Money podcast, called “The Beigies,” featured a story about a manufacturing company in the northeast US, originally published in the Federal Reserve’s Beige Book.
"Another industrial firm had 20 unfilled openings in a plant with 100 employees and said they were making up for it with significant overtime. When asked why they didn’t increase wages to fill the openings, the contact said they would have to pay all the existing workers more which would be uneconomic."
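The firm’s reasoning can be illustrated with a back-of-the-envelope comparison. All dollar figures and hours below are hypothetical (the article reports none); only the headcounts come from the quote. The key structural point is that a raise large enough to attract new hires must, for pay parity, be extended to the entire existing workforce:

```python
employees = 100      # current workforce (from the quote)
openings = 20        # unfilled positions (from the quote)
base_wage = 20.00    # $/hr -- hypothetical
raise_needed = 2.00  # $/hr raise assumed necessary to fill openings -- hypothetical
hours = 40           # standard hrs/week
ot_hours = 8         # overtime hrs/week per current employee to cover the gap
ot_premium = 1.5     # time-and-a-half

# Option A: cover the 20 openings' worth of work with overtime (the firm's choice)
overtime_cost = employees * ot_hours * base_wage * ot_premium

# Option B: raise wages to fill the openings -- but pay parity means
# the raise goes to all existing workers, not just the new hires
new_hire_pay = openings * hours * (base_wage + raise_needed)
raise_for_existing = employees * hours * raise_needed
hiring_cost = new_hire_pay + raise_for_existing

print(f"overtime: ${overtime_cost:,.0f}/wk  vs  hiring: ${hiring_cost:,.0f}/wk")
```

Under these assumed numbers, the overtime premium costs less per week than the raise-plus-hiring option, matching the contact’s claim that raising wages “would be uneconomic.” With different assumptions the comparison could flip – which is precisely why the decision depends on the firm’s actual figures.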
To sustain successful operations, projects should be undertaken in an efficient and transparent manner. Efficiency improves the affordability of projects, increasing opportunities for growth. Transparency allows a broader range of input to refine a project plan, lowers resistance to change, and increases the probability of success.
The six steps below outline a process that can be used to ensure efficiency and transparency in operations projects. With each new initiative launched, these steps should be refined, applying experience gained in previous projects, to tune the process to the dynamics of your organization. After a few iterations, creating and implementing optimal solutions will begin to feel natural, and anything less, anathema.
If you haven’t already done so, I recommend reading 4 Characteristics of an Optimal Solution before proceeding to the six steps. As each step is executed, bear in mind how the activities described aid in achieving the four characteristics desired. If activity begins to stray from the process goals, reassess and adjust the tasks, participants, objectives, and evaluation methods to reestablish and maintain alignment.
If you'd like to contribute to this blog, please email email@example.com with your suggestions.
© JayWink Solutions, LLC