
"The Third Degree"

A New Model for Measuring Service Effectiveness

10/23/2019


 
     Since the dawn of the industrial age, manufacturers have sought ways to improve their operations.  Over time, these attempts became more sophisticated, as techniques and models for the measurement of performance were developed.
     Performance measurement for service industries is a much more recent development.  Fortunately, much of the pioneering work in performance measurement undertaken in manufacturing industries is also applicable to service providers.  However, some techniques require adaptation to the unique operating characteristics of service industries to provide the full benefit of the monitoring tools.
     Overall Equipment Effectiveness (OEE) is a case in point.  OEE could be used to track performance of equipment used to provide a service.  It is much more informative of the core objectives of the operation, however, to use the analogous Overall Service Effectiveness (OSE).  As the name implies, it provides a “big picture” view of the quality of service provided to customers.
     At the outset, two potential issues with OSE should be addressed.  First, use of OSE warrants the same admonitions offered for its counterpart, OEE, in “Beware the Metrics System – Part 1,” particularly with regard to aggregation.  Correlation can be expected between OSE and other measures of customer satisfaction; in this way, it can serve as validation of survey results.  OSE can also be used for “snapshot” comparisons between operating locations, work crews, or other divisions of an organization.  Like OEE, however, it cannot be used to guide improvement efforts until the component measures are interrogated.
     Second, the relationship between “effectiveness” and “efficiency” must remain clear and consistent.  These terms are often used interchangeably, but carelessly so; they are not synonymous, and they must be differentiated for the discussion of OSE to have consistent meaning.  To keep it simple, we will describe them in the following way:

     Effectiveness:  the level of satisfaction attained by providing a service to a customer.
     Efficiency:  the cost incurred by providing a service to a customer.

     Effectiveness without efficiency will threaten profit margins, cash flow, and other financial metrics.  High customer satisfaction will provide continuing demand for services and, therefore, opportunities to improve efficiency and profitability.
     Efficiency without effectiveness will alienate customers, lowering demand.  This, too, threatens profitability and longevity with a much greater challenge to recovery.  The time and expense required to lure customers back – and provide them better service than before – may be more than the company can bear.
     In short, service effectiveness is a fundamental requirement of a successful provider, while efficiency is addressed in continuous improvement efforts.  Fundamentals must be secure in order for other measures to be meaningful.
 
Prior Formulations of Service Effectiveness
     Consultancies of various types offer their views of service effectiveness.  Platitudes abound, but methods for achieving quantifiable, actionable results are difficult to find.
     TopLine Strategies published five metrics related to service effectiveness.  Two of them are pseudoquantitative, requiring Likert-type scales or similar assessments to generate numerical ratings and track trends.  The remaining three have numerical values, but fail to offer genuine insight about customer satisfaction or how it could be improved.
     Great Brook Consulting deserves credit for calling attention to the difference between effectiveness and efficiency and the value of using both internal and external metrics.  The methods advocated for “measurement,” however, are qualitative and anecdotal in nature; at best, some could be transformed into pseudoquantitative measures.  Again, the ability to track trends and drive improvement efforts is severely limited.
     Baker Tilly identifies four dimensions of service effectiveness.  These dimensions could be correlated with the components of OSE, but no attempt is made to do so.  Brief statements of abstract concepts leave us without measurement techniques that can be used to understand and improve operations.
     Ron Moore presents an Overall Service Effectiveness calculation in his book Selecting the Right Manufacturing Improvement Tools.  The components chosen for this OEE-like calculation refer, primarily, to delivery performance.  This formulation exhibits a strong bias for high-volume discrete manufacturing.  This can be forgiven, of course, as the book is overtly manufacturing-centric.  It does mean, however, that our search for a service-oriented formulation continues.
     Academics can usually be counted on to weigh in on topics of interest to industry.  On the topic of Overall Service Effectiveness, their participation appears quite limited.  Eshetie Berhan of Addis Ababa University in Ethiopia has published a formulation of OSE, in the context of public transportation, that also parallels an OEE calculation.  Unfortunately, this formulation distorts the evaluation of service effectiveness by using inappropriate measures in the calculation.
     Berhan’s OSE formulation penalizes the bus company for low passenger counts.  There are too many external influences on passenger count for it to be an accurate reflection of service effectiveness.  These include availability and regulation of other modes of transportation, public awareness of bus stop locations and schedules, perceptions of public transportation, and others.  Market demand can be influenced by service effectiveness, but it is not a reliable measure of it.
     Other distortions are also inherent in this formulation.  Multiple penalties are assessed for schedule performance, though schedule performance is not directly assessed.  It would be more appropriate to account for waiting times and “speed loss” due to traffic congestion in the bus schedule.  Planned times of unavailability – lunch breaks, maintenance, etc. – are included in “downtime losses,” creating another distortionary penalty.  Customer satisfaction is treated as binary, though, in reality, it is a spectrum.
     Clearly, this is not a universal model for service operations.  A new formulation is needed; one with quantitative values that generate actionable insights.
 
Components of OSE
     Details of other viewpoints on OSE have been intentionally sparse to avoid confusion and conflation with the formulation to be presented here.  Readers interested in the details of prior formulations can find sources in “References” at the end of the post.
     The new formulation of Overall Service Effectiveness consists of three components:  availability (A), schedule compliance (S), and expectation fulfilment (E).
 
     Availability is essentially the same in OSE and OEE.  The difference in definition lies in the fact that the OEE calculation refers specifically to equipment availability, while OSE refers, more generally, to the availability of the service.  Availability of a service may or may not rely on availability of equipment.  It may also depend on availability of personnel, material, or information.  The formula for availability is:
     A = (time performing services + time available to perform services) / planned available time
The numerator includes all time when services are being performed and all time that services could be performed if there was demand.  The denominator includes all time that the provider intends to make services available (e.g. “hours of operation”).  The ratio is calculated with cumulative times in the period of interest (week, month, etc.).
     Potential for misuse exists:  working overtime (increasing the numerator) without claiming it as planned time (increasing the denominator) artificially improves availability; it could even exceed 100%, a practical impossibility.  The OSE value would be correspondingly inflated, potentially masking issues in other areas.  If poor scheduling practices necessitate overtime, for example, false but consistent OSE values will not alert managers to an underlying problem.  This potential for misuse reinforces the need to understand each component of the calculation and to avoid overreliance on the composite value alone.
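     With time logged by category, the availability calculation is simple to automate.  The following is a minimal sketch in Python; the function name and sample values are illustrative assumptions, not part of the formulation.

def availability(service_time, standby_time, planned_time):
    """Availability (A): time spent performing services, plus time
    services could have been performed, divided by the time the
    provider planned to be available.  All arguments are cumulative
    hours for the period of interest."""
    return (service_time + standby_time) / planned_time

# Example: a provider planned 50 hours of availability in a week,
# performed 38 hours of service, and stood ready for 6 more hours.
A = availability(service_time=38, standby_time=6, planned_time=50)
print(f"A = {A:.1%}")  # A = 88.0%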
 
     Schedule compliance may have different meanings for different service providers.  Although OSE would, ideally, evaluate all services with a single, universal formulation, providing a useful evaluation tool for the broadest range of services possible requires multiple definitions of schedule compliance; three are presented here.  The most appropriate formula to use depends on the nature of the service provided and customer sensitivities.  Each provider should use the definition of schedule compliance most closely correlated with the satisfaction of its customers.
     When performing “while-you-wait” services, the duration of the service may be the overriding concern.  This is often true of daytime appointments that require customers to take time off work.  In these cases, providers may choose to use the following schedule compliance formula:
     S1 = scheduled service duration / actual service duration
The numerator is the amount of time scheduled for the service – the promise made to the customer.  The denominator is the actual time the service required to complete.  The ratio is calculated with cumulative durations in the period of interest.
     If service is completed faster than planned, S > 1.  This is a valid calculation, but should trigger a review of scheduling practices, work standards, etc.  Frequent occurrence may indicate that estimates are being “padded” to improve performance metrics without improving performance.  Low performance may indicate that the standard is too aggressive or that training is required.
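     As a minimal sketch of this duration-based definition (in Python, with illustrative durations):

def schedule_compliance_duration(scheduled, actual):
    """S1: cumulative scheduled durations divided by cumulative actual
    durations for all services in the period of interest."""
    return sum(scheduled) / sum(actual)

# Three services scheduled for 60, 90, and 45 minutes that actually
# required 70, 85, and 60 minutes to complete (illustrative data).
S1 = schedule_compliance_duration([60, 90, 45], [70, 85, 60])
print(f"S1 = {S1:.1%}")  # S1 = 90.7%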
     Services that interrupt customers’ normal activities may require tracking of on-time performance for both starting and completing services to adequately assess effectiveness.  For example, a homeowner who must disconnect utilities in preparation for a service will be sensitive to both commencement and completion times.  For this type of situation, providers may choose to define schedule compliance as
     S2 = (on-time starts + on-time completions) / (2 × number of services scheduled)
The numerator is a simple tally of all services started as scheduled and all services completed as promised.  The value in the denominator is the total number of scheduled times (two per service).  The ratio is calculated with the cumulative number of services scheduled in the time period of interest.
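     A sketch of this tally-based definition, with illustrative counts:

def schedule_compliance_tally(on_time_starts, on_time_completions, services_scheduled):
    """S2: on-time starts plus on-time completions, divided by the
    total number of scheduled times (two per service)."""
    return (on_time_starts + on_time_completions) / (2 * services_scheduled)

# Of 40 services scheduled, 35 started on time and 33 were completed
# as promised (illustrative data).
S2 = schedule_compliance_tally(35, 33, 40)
print(f"S2 = {S2:.1%}")  # S2 = 85.0%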
     The final definition of schedule compliance is the most rigorous.  It is well-suited for services that require idling of equipment or personnel (i.e. downtime), or otherwise disrupt an organization’s operations.  Overhead work that requires a ladder to be placed at the entry is very disruptive for a business, especially if it happens to be a carry-out only restaurant!
     It is also applicable to highly-standardized work or fully-loaded schedules (i.e. no slack).  It takes the following form:
     S3 = committed service time / (committed service time + |start deviation| + |finish deviation|)
The provider’s commitment in time to complete the service is the numerator.  The denominator adds to this value the absolute values of the deviations from planned start and finish times.  Deviations from scheduled times are recorded as absolute values because events that occur early or late are disruptive to the customer.  The ratio is calculated with cumulative values for all services performed in the time period of interest.
     Simultaneously tracking performance of three key elements of scheduling – start time, completion time, and duration – incentivizes accuracy in scheduling and diligence in completing assignments as scheduled.  Slowing progress of work to avoid an early completion penalty is usually easier to detect than a “padded” schedule otherwise would be.
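     A sketch of this deviation-based definition, again with illustrative values; note that early and late deviations both reduce the score:

def schedule_compliance_deviation(committed, start_devs, finish_devs):
    """S3: cumulative committed service time divided by committed time
    plus the absolute deviations from planned start and finish times."""
    total = sum(committed)
    penalty = sum(abs(d) for d in start_devs) + sum(abs(d) for d in finish_devs)
    return total / (total + penalty)

# Two jobs committed at 120 and 180 minutes; the first started
# 10 minutes late, the second finished 15 minutes early (illustrative).
S3 = schedule_compliance_deviation([120, 180], [10, 0], [0, -15])
print(f"S3 = {S3:.1%}")  # S3 = 92.3%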
 
     Expectation fulfilment could also be called “specification compliance” or “expectation realization.”  A term including “expectation” is preferred to one including “specification” because some expectations of a service experience are unspecified.  If you hire a painter, you may specify “paint the house white and the shutters green.”  You will probably not specify “do not paint the window panes,” but the painter still needs to fulfil this unstated expectation.  Expectations in this context may also be called “key service attributes” – the characteristics of a service experience that influence purchasing decisions.
     Availability and schedule compliance can be calculated by the service provider using its own data.  Expectation fulfilment, in contrast, requires customer input.  This formulation uses a checklist to collect customer feedback.  The checklist is customized by the provider to reflect its offerings, customer base, etc.  The checklist includes foreseeable expectations – a TV in the waiting room, regular communication, no foul odors, debris removed, etc. – and a mechanism for collecting information on expectations that the provider had not predicted.  The additional expectations may be collected in a “general comments” section or requested explicitly as a component of feedback.
     Each item on the checklist is assigned a point value indicating its relative weight or significance to customer satisfaction.  For example, a vending machine in a waiting room may be worth 3 points (“important”), while free coffee may be worth 1 point (“nice to have”).  This means that the absence of a vending machine would be much more detrimental to customer satisfaction than the lack of free coffee.  Point values should also be assigned to the open feedback sections of the questionnaire.  Scoring these will require some interpretation; highly complimentary comments may earn maximum points, while negative comments earn zero points.
     Based on completed customer feedback checklists, expectation fulfilment is calculated as follows:
     E = (points earned for satisfied expectations + open feedback scores) / maximum possible points
The numerator sums the point values of all items that customers indicated were satisfied expectations, plus the open feedback scores.  The maximum number of points that can be earned on a checklist is in the denominator.  The ratio is calculated with cumulative scores of all checklists completed in the time period of interest.
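     A checklist-scoring sketch in Python follows; the items, point values, and open feedback interpretation are hypothetical examples, not a prescribed checklist:

# Hypothetical checklist: item -> (point value, satisfied on this visit?)
checklist = {
    "vending machine in waiting room": (3, True),
    "free coffee": (1, False),
    "regular communication": (3, True),
    "debris removed": (2, True),
}
open_feedback_score = 2  # interpreted from customer comments (hypothetical)
open_feedback_max = 3    # maximum points assigned to open feedback

earned = sum(pts for pts, ok in checklist.values() if ok) + open_feedback_score
possible = sum(pts for pts, _ in checklist.values()) + open_feedback_max
E = earned / possible
print(f"E = {E:.1%}")  # E = 83.3%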
 
     As is the case with OEE, OSE components are often expressed as percentages.
 
The OSE Composite
     As mentioned previously, Overall Service Effectiveness is a composite “score” analogous to OEE.  It is calculated as follows:
     OSE = A × Sx × E
where A, Sx, and E are availability, the schedule compliance definition of choice, and expectation fulfilment, respectively.  OSE is typically expressed as a percentage.
     OSE is best used to establish a baseline and evaluate improvement efforts.  There is no threshold value that separates success from failure, but the lower your OSE, the more concerned you should be and the more effort should be expended on improvement initiatives.  While “world-class” is already a disfavored term (see “Is World-Class Good Enough?”), it has even less value in the context of OSE than in many others.  Services are highly personalized and hyper-local, making global comparisons exceedingly difficult.  That said, 85% is a reasonable target minimum; performance ratings below this level should trigger investigations and process development activities.
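     As a closing sketch, the composite simply multiplies the three components; the values below are the illustrative ones computed in the earlier sketches:

def overall_service_effectiveness(A, S, E):
    """OSE: availability x schedule compliance x expectation fulfilment."""
    return A * S * E

OSE = overall_service_effectiveness(A=0.880, S=0.923, E=0.833)
print(f"OSE = {OSE:.1%}")  # OSE = 67.7%

Note how three components in the 83 – 92% range multiply to a composite below 70%; this is one more reason to review the components rather than rely on the composite value alone.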
 
Public Transportation Revisited
     To provide an example application of the proposed Overall Service Effectiveness calculation, we will revisit Berhan’s public transportation system.  Complete raw data is not available, so assumptions are required to complete the calculation.  Nonetheless, reconsidering this real-life example is a valid application of the OSE method of evaluation.
     For this example, “Month 1” data will be used to simulate a single day of operation.  The transportation system availability is calculated as follows:
     [Availability calculation from Berhan’s Month 1 operating data; the original figure is not reproduced here.]
     Recognizing that early and late arrivals and departures can be problematic for passengers, schedule compliance is calculated using
     S3 = scheduled segment time / (scheduled segment time + |departure deviation| + |arrival deviation|)
We will assume that each segment of the bus route is expected to be completed in 15 minutes.  Berhan’s data shows significant waiting times for various reasons, but data in the format required is unavailable.  Therefore, we will estimate that departure time deviations average 2 minutes and arrival time deviations average 1 minute.  Thus,
     S3 = 15 / (15 + 2 + 1) = 15/18 ≈ 83.3%
     We will use an expectation fulfilment value taken directly from Berhan’s passenger satisfaction level data for Month 1.  Without more detailed data or a checklist system in place, we cannot generate a better estimate of expectation fulfilment.  Therefore, we assume
     [Expectation fulfilment (E) set equal to Berhan’s Month 1 passenger satisfaction level; the original figure is not reproduced here.]
     Overall Service Effectiveness is now calculated as the product of the three components:
     OSE = A × S3 × E
     Looking at the composite OSE value alone, it is obvious that this transportation system is in dire need of improvement.  It is not obvious, however, what improvements are needed – equipment reliability, scheduling practices, or customer amenities.  Considering each component individually reveals that equipment reliability is good and scheduling practices could be improved, but are not the core issue.  The remaining component – expectation fulfilment or customer satisfaction score – has the greatest potential for improvement.  Any rating this low is critical and must be addressed immediately if the operation is to succeed.  It is logical to conclude that this is contributing to the low demand experienced by the bus line.  Specific research into perceived deficiencies (via checklist or other means) and increased marketing efforts may be required to recover from this condition.
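     To make the arithmetic of this example reproducible, the sketch below plugs in the stated schedule assumptions; the availability and expectation fulfilment values are placeholders only, as Berhan’s raw Month 1 figures are not reproduced here.

# Schedule compliance from the stated assumptions: 15-minute segments,
# 2-minute average departure deviation, 1-minute average arrival deviation.
S3 = 15 / (15 + 2 + 1)  # ~0.833

# Placeholders standing in for Berhan's Month 1 data (NOT published figures):
A = 0.90  # hypothetical: consistent with "equipment reliability is good"
E = 0.35  # hypothetical: a critically low satisfaction level

OSE = A * S3 * E
print(f"OSE = {OSE:.1%}")  # about 26% with these placeholder values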
 
     The goal of formulating this calculation of Overall Service Effectiveness is to give service providers a tool comparable to that used by manufacturers to help manage their businesses.  It carries with it the potential advantages and pitfalls characteristic of management metrics (see “Beware the Metrics System” – Part 1 and Part 2).  If its limitations are respected and practitioners remain vigilant – detecting, correcting, and deterring misuse – OSE can provide useful information for evaluating a business’ performance trends, contributing to its long-term success.
 
     Service providers that would like to implement performance-monitoring tools are invited to contact JayWink Solutions for further guidance or a formal consultation.
 
References
[Link] “The Metrics Behind Service Effectiveness,” TopLine Strategies.
[Link] “Measuring Service Effectiveness,” Great Brook Consulting.
[Link] “Service Effectiveness,” Baker Tilly.
[Link] Selecting the Right Manufacturing Improvement Tools:  What Tool?  When?  Ron Moore.  Elsevier, 2007.
[Link] “Overall Service Effectiveness on Urban Public Transport System in the City of Addis Ababa,” Eshetie Berhan.  British Journal of Applied Science & Technology, Vol. 12, 2016, pp. 1-9.  doi:10.9734/BJAST/2016/19679.
[Link] “Line Work,” Ricardo G. Fierro.  Quality Progress, May 2018.

 
Jody W. Phelps, MSc, PMP®, MBA
Principal Consultant
JayWink Solutions, LLC
jody@jaywink.com