Benchmarking Industrial Services: A Methodology Outline

Benchmarking is a powerful tool for improving performance. In industrial services, it has been underused, partly because services are more difficult to benchmark and partly because companies did not see a pressing need for it. But as service becomes more of a competitive focal point, managements face pressure to up their game, and benchmarking is gaining in importance. Nevertheless, benchmarking exercises need solid methodological underpinnings to be successful. This article shows how it can be done.

The purpose of benchmarking is to discover, explain and close performance gaps between companies or business units. Improvement takes place through innovative adaptation of practices and processes that others have pioneered or perfected – an iterative process of accelerated learning and capacity building. Through well-designed benchmarking exercises, all types of companies can benefit – from experienced top performers to newcomers and relative laggards. This is because it is very rare for one company to outperform all others in everything. Even businesses with relatively low overall performance scores can occasionally outshine their more accomplished peers in some areas.

Benchmarking is a powerful, yet often underappreciated management tool. It is not, for example, merely a passive high-level comparison of financial metrics, KPIs, or operating statistics. Such comparisons undoubtedly have their uses – not least as reality checks. But they are not sufficient to identify strengths or weaknesses with the necessary specificity and granularity, let alone their causes or how to fix them where necessary. The explanatory power of such simple comparative exercises is limited: the analytical and learning elements of benchmarking are neglected, yet they are key. A benchmarking exercise needs to be sufficiently rigorous, deep and geared towards discovering and understanding cause-and-effect relationships. It must deliver insights and knowledge, and it requires committed participants with full organizational backing. The benefits, however, can be substantial and sustained.

One factor complicating benchmarking initiatives is that businesses are inherently diverse and do not act in a vacuum, so their differences and operating context need to be considered. Companies that address different customer types, make different products, or operate in different industrial or economic cycles – and that therefore have different objectives and strategies – will naturally show different performance levels on various metrics and KPIs. In fact, KPIs that are important in one context may be less relevant in another. For example, a company making complex one-off products for a limited number of customers will logically focus on the high technical skills of its field service engineers, which will probably be strongly correlated with performance. In contrast, a company making standard products with a broad installed base will naturally find a higher correlation with excellence in service logistics at scale.

Nevertheless, these differences are neither an insurmountable obstacle to designing benchmarking programs with wide participation, nor should they impede learning from peers in different industries and markets. But the adaptation of practices and processes needs to take into account the operating environment and the specific circumstances and objectives of each participant.

Henry Ford, the founder of Ford Motor Company, reportedly observed how workers in slaughterhouses, after doing their jobs, gave the carcass hanging from a monorail a shove to send it to the next station. This sparked the idea for the assembly line.

Eiji Toyoda, one of Toyota’s most famous leaders, is said to have observed how American grocers in the 1950s stocked their stores at night, just in time for customers in the morning. That observation inspired Toyota’s Just-in-Time production system.

Benchmarking originally became formalized and popular in the late 1980s through its inclusion in the Malcolm Baldrige Award criteria (the world’s foremost operational performance and quality award at the time). According to a regular survey by Bain & Company, a management consultancy, it rightly remains one of the world’s most popular and effective management tools. Traditionally, however, (industrial/technical) service businesses have not made as much use of benchmarking as product businesses. In manufacturing, for example, benchmarking is now baked into many business practices: companies regularly take apart (reverse engineer) competitive products to understand their features, design engineering, materials, and cost-performance ratios; they map competitors’ supply chains in high granularity and compare them to their own; and they monitor each other’s production practices. Services, however, are mostly intangible and difficult to reverse engineer. They are less automated and standardized. In addition, there is extremely limited published data available (financial or otherwise) on service operations or performance. To establish how well they are doing relative to peers and competitors, many companies rely on hearsay or, in some cases, expensive research reports, which are often of questionable quality due to insufficient rigor and a lack of both a systematic methodology and relevant data.

But these are not the only reasons for the absence of dynamism in service benchmarking. In the past, (after-sales) service was considered a secondary, static business in a mostly captive market with limited competitive intensity. Service was expected to generate revenue automatically by virtue of the installed base. Financial and operational performance was taken for granted – and, in any case, was highly dependent on spare parts, which for many companies is still the case. So investing in service-specific market or competitive intelligence, data, or benchmarking exercises was often considered of limited value and not expedient – notwithstanding the fact that, as a result, top and service managements were left without anchor information on how their markets were developing, how their business was evolving, or how they were really performing relative to others. This made the setting of objectives and strategy, resource allocation, and investment decisions mostly arbitrary. Appraising management performance or designing incentives was difficult, and improvement efforts were based overmuch on theory or gut feel rather than data. But this was mostly tolerated – though forward-thinking companies did far more.

As service has been gaining in importance relative to products – initially because the size of the installed base began to dwarf annual product sales, making many companies in different industries far more dependent on service for revenue growth and profitability (and attracting pure-play third-party service providers into the markets, thereby increasing competitive intensity) – top managements have started to focus more on service performance and how to improve it. In addition, the idea of the “servitized” economy (everything-as-a-service models) has been gaining traction, with new offerings and business models emerging on a continuous basis.

As a result, the service business has been moving from the edge towards the center of companies’ activities and is becoming a competitive focal point. Managers find themselves having to know much more, much faster, about service markets, their competitive structure and dynamics, and what makes service businesses really “fly”. They must learn to juggle service-specific objectives, understand critical success factors, and drive performance in service operations. Benchmarking is one of the best available tools for this purpose.

A key challenge in managing service businesses is that no generally accepted, empirically backed framework for service excellence exists. Critical success factors and performance indicators are usually unverified and poorly understood – particularly as regards their relative importance. It follows that the reasons some companies do very well while others do not are often unclear. That is not to say that there are no smart ideas and practical approaches to managing service, but they are often generic and one-size-fits-all or, in other cases, apply only in specific circumstances. Cause-and-effect relationships are frequently fuzzy. For many strategic and operational management concepts (e.g., in manufacturing, supply chain, marketing) robust methodologies have been developed over time – but this is less the case for service. Benchmarking offers a useful way forward here: an opportunity not only to uncover and quantify cause-and-effect relationships but also, by extension, to build a reliable, upgradeable framework for achieving service excellence.

For a benchmarking exercise to be meaningful and successful, it needs a clear starting point and a sense of direction – a common thread and a logical structure that underpins it. It makes no sense to measure and compare every possible indicator without understanding its relevance or how it fits into the overall picture. For this reason, a benchmarking exercise needs to start with a hypothesis for a framework of the critical factors that drive service performance, broken down to reasonable detail. We call this a “Service Excellence Framework” and present the basis for one here. During the benchmarking process the critical factors are operationalized and quantified as much as possible, then tested for relevance and for how well they correlate with the relative performance of the participants – both individually and in combination. This helps identify and explain causal relationships and gauge their relative weighting and contribution. From the original hypothesis, the framework can then be iteratively developed and solidified with additional data from more participants. Differences in performance and gaps to close can be identified at a sufficiently granular level. Improvement actions can be defined, quantified, costed, and prioritized. The process is repeated at regular intervals and is flanked by analysis and deep dives (in workshops) to explain, understand and challenge results, and to update or upgrade the framework as required.
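
To make the quantification step concrete, here is a minimal sketch in Python of how operationalized factor scores might be tested against a performance measure across participants. The participants, factor names, scores, and the choice of service margin as the performance metric are purely illustrative assumptions, not data from an actual exercise:

```python
# Minimal sketch (hypothetical data): testing how operationalized factor
# scores relate to a performance measure across benchmarking participants.
import pandas as pd

# Each row is one participating business unit; factor scores are graded 0-10,
# "service_margin_pct" stands in for whatever overall performance metric is chosen.
data = pd.DataFrame({
    "participant":        ["A", "B", "C", "D", "E", "F"],
    "engagement_score":   [7.5, 4.0, 8.0, 5.5, 9.0, 3.5],
    "portfolio_score":    [6.0, 5.0, 8.5, 4.5, 7.5, 4.0],
    "delivery_score":     [8.0, 6.5, 7.0, 5.0, 9.0, 4.5],
    "service_margin_pct": [18.0, 11.0, 21.0, 12.5, 24.0, 8.0],
})

factors = ["engagement_score", "portfolio_score", "delivery_score"]

# Spearman rank correlation is a sensible first choice with few participants
# and graded (ordinal) scores.
correlations = data[factors].corrwith(data["service_margin_pct"], method="spearman")
print(correlations.sort_values(ascending=False))

# A provisional composite score using the correlations as weights; in a real
# exercise the weights would be refined iteratively as more participants join.
weights = correlations.clip(lower=0)
data["composite"] = (data[factors] * weights).sum(axis=1) / weights.sum()
print(data[["participant", "composite", "service_margin_pct"]])
```

In a real exercise the sample would be larger, the analysis would control for context factors such as product complexity, and the weights would be refined round after round.
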
Our Service Excellence Framework (which should be applied at the level of Porter’s strategic business unit) consists of six main drivers or overarching critical success factors, which are in turn impacted by further factors, and those, in turn, by others. Here is a brief look at each one:

Service Engagement

This factor determines how “central” service is to the company – in other words, how much and with what intensity a company engages in and with service. Engagement is by nature a fuzzy factor but can nevertheless be quantified (operationalized) in several ways. For example, it can be postulated that indicators for engagement are (strength of) “leadership”, “empowerment”, “insight”, “positioning”, “investment”, and (management) “attention”. These indicators can then be defined in ways that allow measurement or grading. For example, “leadership” could be proxied through measures such as years of service experience, the degree of service representation at top management level, and the breadth of functional expertise available. “Attention” can be defined as a mixture of how closely top management follows up on the service activity beyond the figures and the intensity with which it communicates about service, both internally and externally. Measures of this may be how often service is mentioned in annual reports, analyst presentations, or press releases; how much (and how systematically) management collects information about the service markets in which the business competes; whether the service business has won any awards; or to what extent the company invests in the marketing of the service activity. Similar considerations guide the definition of the other indicators.
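
As an illustration of how such fuzzy indicators can be turned into a graded score, the following sketch encodes one possible rubric. The six indicator names come from the text above; the proxy descriptions, the 0-5 grading scale, and the weights are assumptions made for the example only:

```python
# Illustrative sketch only: one way to turn the "Service Engagement" indicators
# into a graded factor score. The indicator names follow the text; the proxy
# descriptions, the 0-5 grading scale, and the weights are assumptions.
ENGAGEMENT_RUBRIC = {
    # indicator: (weight, what the grader looks at)
    "leadership":  (0.20, "service experience and representation at top-management level"),
    "empowerment": (0.15, "decision rights and budget authority of the service organization"),
    "insight":     (0.15, "how systematically service-market information is collected"),
    "positioning": (0.15, "how prominently service features in internal and external communication"),
    "investment":  (0.20, "investment in service capabilities relative to revenue"),
    "attention":   (0.15, "top-management follow-up on service beyond the figures"),
}

def engagement_score(grades: dict[str, float]) -> float:
    """Weighted average of indicator grades (each 0-5), scaled to 0-100."""
    total_weight = sum(weight for weight, _ in ENGAGEMENT_RUBRIC.values())
    weighted_sum = sum(ENGAGEMENT_RUBRIC[name][0] * grade for name, grade in grades.items())
    return 100 * weighted_sum / (5 * total_weight)

# Grading for one hypothetical participant
print(engagement_score({
    "leadership": 4, "empowerment": 3, "insight": 2,
    "positioning": 3, "investment": 4, "attention": 2,
}))  # -> 62.0
```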

Service Portfolio

This factor covers the breadth and comprehensiveness of the service offerings portfolio and the degree of market coverage and penetration that the business achieves. Service offerings, however, may differ considerably, and it may not be appropriate to compare a business with relatively simple offerings (e.g. repairs) offered broadly with one that has a narrow market focus (e.g. selected customers) but provides services of high sophistication (e.g. productivity services). Such differences must be captured and taken into account (weighted) in benchmarking exercises, otherwise the informational value and relevance of the results is diminished. While it does not necessarily follow that a company will perform better as it moves towards more sophisticated offerings and broader market coverage, it is postulated in this example that strong service businesses will both increase their market coverage and seek to differentiate with improved offerings – a hypothesis which may be confirmed or refuted by the benchmarking results.

To assess the service portfolio, we again evaluate six indicators: the portfolio mix “evolution” (i.e., the dynamism with which the company brings new service offerings to market); the “coverage” and “penetration” noted above; the “strength” of the offerings; how much the company “invests” in developing offerings; and whether the contribution of the offerings is adequate to provide a sufficient return on investment. For example, the “strength” of the offerings is a measure of their benefits relative to competing alternatives, including price, performance, and features, and of the extent to which the company stands behind them through warranties or guarantees. In the same vein, penetration can be measured by looking at customer wallet share, market shares in different regions or for different types of offerings, or the extent to which the service business achieves sales and contribution targets.
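
Two of these measures reduce to simple ratios once the underlying data has been collected. The sketch below shows customer wallet share and installed-base penetration, with all figures assumed for illustration:

```python
# Hypothetical figures for two of the portfolio measures mentioned above:
# customer wallet share and penetration of the serviceable installed base.
service_revenue_from_customer = 1.2e6   # what this customer buys from us per year (assumed)
customer_total_service_spend  = 4.0e6   # the customer's total spend on such services (assumed)
wallet_share = service_revenue_from_customer / customer_total_service_spend   # 0.30

units_under_service_agreement = 850     # assumed
serviceable_installed_base    = 3_400   # assumed
penetration = units_under_service_agreement / serviceable_installed_base      # 0.25

print(f"Wallet share: {wallet_share:.0%}, installed-base penetration: {penetration:.0%}")
```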

Sales Capacity

This factor measures the ability of the service business to successfully make sales and encompasses indicators such as the sales mix, sales performance over time, the strength of the salesforce, the effectiveness of sales management, the strength of customer relationships, and pricing. As with the offerings portfolio, it is important to understand what type of sales a business is pursuing. In this case, we have indicatively classified service sales into three types: i) “captive sales” – sales over which the customer has extremely limited, if any, discretion; in other words, the customer must buy the service from the supplier (e.g., an urgent repair which only the original supplier can provide). These sales are essentially non-competitive and happen solely because the supplier has delivered the product. For this type of (traditional) sales virtually no sales capability is required, and they grow over time as a function of the installed base (though as the equipment ages, other service providers may develop the capability to provide the service). In contrast, ii) sales of bundled services and iii) sales of advanced services, over which the customer does have discretion, are open to competition and require proactive action and a sales capability. Examples of such sales are bundled service contracts of standard or high sophistication. They have higher cyclicality and higher uncertainty and are much more price driven. This factor focuses solely on these latter, competitive types of sales.

In terms of some of the indicators, the “strength of customer relationships”, for example, can be understood and measured as the length of time a relationship has existed, the share of the customer’s wallet captured by the business, or how often the customer buys from the business without formal calls to tender. The indicator “sales performance” can be measured as the extent to which the business is succeeding in expanding its customer base or improving its hit ratio. And the indicator “pricing” can be seen as the extent to which the company uses pricing as a lever for sales – for example, adjusting prices to ability or willingness to pay, or, in the case of cost-plus pricing, allocating the right costs to the offerings and preventing cost leakage. Equivalent measures can be used for the other indicators.
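
The sketch below illustrates, with assumed figures, the split of the sales mix into the three types described above and a simple hit ratio for the competitive portion:

```python
# Illustrative sketch (assumed figures): the sales mix split into the three
# types described above, plus a simple hit ratio for the competitive portion.
sales_by_type = {"captive": 6.5e6, "bundled": 2.0e6, "advanced": 1.5e6}  # annual sales, assumed
total_sales = sum(sales_by_type.values())
competitive_share = (sales_by_type["bundled"] + sales_by_type["advanced"]) / total_sales  # 0.35

bids_submitted = 120   # offers for bundled/advanced services in the period (assumed)
bids_won       = 42
hit_ratio = bids_won / bids_submitted  # 0.35

print(f"Competitive sales share: {competitive_share:.0%}, hit ratio: {hit_ratio:.0%}")
```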

Delivery

The “Delivery” factor determines the effectiveness with which the business delivers on its offerings and commitments, particularly in terms of solving customers’ problems – arguably the overriding task in after-sales service – while containing costs. It encompasses indicators such as “problem-solving”, “productivity and capacity utilization”, “cost containment”, “customer satisfaction”, “technical competence” and “digital tools”. Essentially this factor refers to operational excellence, but with specific service adaptations or considerations – for example, dealing with the fact that low-frequency problems usually require substantial effort to resolve, or that a small number of problem types causes most of the cost, in a rough 80/20 Pareto distribution. Furthermore, it should take into account that service businesses dealing with complex products (and therefore difficult problems to solve) will place a higher emphasis on technical management, while those dealing with standard products will emphasize logistics (including for the service workforce, whether in the field or in workshops).

Indicators such as problem-solving ability can be quantified through measures such as time to response or time to resolution, across the board or for specific sub-types of problems. Productivity and utilization, notoriously difficult to measure in service, can be proxied through rework or warranty rates or performance against standards. An indicator such as technical competence can be measured as an engineering experience ratio, but also by comparing the compensation levels of service engineers of different seniority with other positions of similar seniority in the organization. Finally, the use of digital tools can be measured both directly and indirectly, using proxies such as the share of technical know-how that is available and accessible online, or the quantity of data generated within the service business relative to other business units in the organization.
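
The following sketch, using made-up data, shows two of these delivery measures in practice: the Pareto concentration of resolution cost across problem types, and time-to-resolution percentiles:

```python
# Minimal sketch (made-up data) of two delivery measures discussed above: the
# Pareto concentration of resolution cost across problem types, and
# time-to-resolution percentiles.
import statistics

# Annual resolution cost per problem type (hypothetical)
cost_by_problem_type = {
    "sensor drift": 420_000, "controller failure": 260_000, "wear parts": 110_000,
    "calibration": 60_000, "software configuration": 45_000, "connectivity": 30_000,
    "documentation": 20_000, "cosmetic": 10_000, "operator misuse": 8_000, "other": 5_000,
}
costs = sorted(cost_by_problem_type.values(), reverse=True)
top_20pct_share = sum(costs[:2]) / sum(costs)   # the 2 most expensive of 10 types
print(f"Top 20% of problem types cause {top_20pct_share:.0%} of resolution cost")

# Time to resolution in hours for a sample of cases (hypothetical)
ttr_hours = [4, 6, 6, 8, 9, 12, 14, 18, 24, 30, 36, 48, 72, 96]
deciles = statistics.quantiles(ttr_hours, n=10)
print(f"Median time to resolution: {deciles[4]:.0f} h, 90th percentile: {deciles[8]:.0f} h")
```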

Innovation

Sustained ability to innovate provides a competitive advantage and better chances of success overall. Yet while innovation is relatively easy to define in products – for example, in terms of functionality, features, performance, technology used, and even price – it is more difficult to define in service businesses. In general, service innovations fall into two broad categories: the first has to do with structures and back-end processes (e.g., management systems and automation to manage parts, field service logistics, and customer calls); the second with improving customer outcomes through new offerings (e.g., outcome-based contracts using technology such as computer-based vibration analysis in the 1980s, or predictive algorithms and Augmented Reality now). Another way to classify innovation for benchmarking purposes is on a continuum: from incremental (e.g., new ways of knowledge management, better parts logistics algorithms), to boundary-pushing (e.g., predictive algorithms and remote interventions), to different grades of disruptive innovation (e.g., crowdsourcing field service, de-skilling maintenance, or 3D-printing spare parts), where the entire business needs to be recast or rethought because its central model and assumptions are no longer valid. Such innovations often disrupt not only individual companies but entire markets, by attracting new entrants or by providing considerable advantage to first movers through so-called “winner-takes-all” and network effects. These have been observed in other markets – not yet fully in service markets, though they may be around the corner. A service business strong in innovation not only (probably) performs better today but is also better positioned to lead and capture value from disruptive change as it emerges.

The innovation performance indicators in our framework include “ambition”, “people”, “leadership”, “process”, “investment” and “output”. For example, “ambition” can be graded on whether the business is an early adopter, early majority, or follower in deploying new technologies, concepts, or service offerings, as well as on whether it has articulated and communicated innovation objectives. “Leadership” may be graded on whether innovation is strategically used for competitive advantage, while “output” measures the rate of adoption of proposed innovations.
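
As a small illustration with assumed scales and figures, “ambition” and “output” could be quantified roughly as follows:

```python
# Illustrative only: two of the innovation indicators quantified with assumed scales.
ADOPTER_GRADE = {"early adopter": 5, "early majority": 3, "follower": 1}  # proxy for "ambition"

proposed_innovations = 25   # ideas formally proposed in the period (assumed)
adopted_innovations  = 9    # ideas actually deployed in operations (assumed)

ambition_grade = ADOPTER_GRADE["early majority"]              # graded from interviews and evidence
output_rate    = adopted_innovations / proposed_innovations   # "output": adoption rate, 0.36

print(f"Ambition grade: {ambition_grade}/5, innovation adoption rate: {output_rate:.0%}")
```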

Spare Parts and Logistics

Due to their outsized contribution to revenue and profitability, spare parts often constitute a business within a business in traditional service organizations. In fact, the spare parts business is often the subject of extensive benchmarking exercises on its own. For this reason, we make the case that it should be a separate area of focus, while recognizing the need for integration with the rest of the service business. In our framework, critical factors for spare parts include “logistics response”, “ease of access” for customers, “cost containment”, “advanced services”, “pricing” and the adoption of “digital tools” – all geared towards optimizing customer service while managing costs and prices to maximize profitability. Most indicators for this factor can be easily quantified. For example, “logistics response” relates to the ability of the company to provide a rapid and trouble-free response to inquiries and delivery execution, perhaps from strategically placed stocks, while providing broad coverage of products supported (versions, models) and regions served (as necessary). “Cost containment” is about managing the cost of logistics, obsolescence, and inventory while avoiding cost leakage. “Pricing” refers to actively using price to manage demand and profitability rather than basing it only on cost and relative scarcity (e.g., proprietary versus readily available parts). “Advanced services” relates to selling parts-related services, including concepts such as availability contracts or insurance, extended warranties, price guarantees (options), version management, reverse logistics, or other technical support.
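
Several of these spare-parts indicators reduce to simple ratios once the data is available. The sketch below uses assumed figures for fill rate, on-time delivery, inventory turns, and obsolescence:

```python
# Sketch with assumed figures: common quantifications of the spare-parts
# indicators named above (logistics response and cost containment).
order_lines_received          = 5_000
order_lines_shipped_on_time   = 4_650
order_lines_filled_from_stock = 4_400   # complete, no backorder

fill_rate    = order_lines_filled_from_stock / order_lines_received   # "logistics response"
on_time_rate = order_lines_shipped_on_time / order_lines_received

inventory_value   = 6.0e6
annual_parts_cogs = 9.0e6
obsolete_value    = 0.45e6
inventory_turns   = annual_parts_cogs / inventory_value               # "cost containment"
obsolescence_rate = obsolete_value / inventory_value

print(f"Fill rate {fill_rate:.0%}, on-time {on_time_rate:.0%}, "
      f"turns {inventory_turns:.1f}, obsolescence {obsolescence_rate:.1%}")
```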

The outcome of a comprehensive, well-conducted benchmarking exercise has many aspects. Participants are not only able to gauge their performance relative to peers; they gain deeper, objective, data-based insights and know where, why and how they should improve. They can determine the cost-benefit (return on investment) of potential improvement efforts and prioritize accordingly. Furthermore, they are better able to anticipate developments (including disruptive ones) in their field and stay ahead of the curve. And by staying engaged they increase their longer-term competitiveness and their ability to influence their markets and their industry. Benchmarking has helped thousands of companies strengthen their performance. It can do the same for industrial services.

Service Innovation for value-driven opportunities:

Facilitated by Professor Mairi McIntyre from the University of Warwick, the workshop explored service innovation processes that help us understand what makes our customers successful.

In particular, the Customer Value Iceberg principle goes beyond the typical Total Cost of Ownership view of the equipment world and explores how that equipment impacts the success of the business. It forces us to consider not only the direct costs associated with using the equipment but also indirect costs such as working capital and risks.
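
As a rough illustration of the principle, the sketch below adds indirect cost items to a conventional direct-cost Total Cost of Ownership figure; all numbers and categories are hypothetical:

```python
# Illustrative sketch (all figures and categories assumed): extending a direct-cost
# Total Cost of Ownership view with less visible, indirect cost items.
direct_costs_per_year = {        # visible, equipment-related costs
    "energy": 120_000,
    "maintenance_and_parts": 80_000,
    "operator_labour": 150_000,
}
indirect_costs_per_year = {      # below-the-waterline items the iceberg view surfaces
    "working_capital_tied_up": 40_000,
    "downtime_lost_output": 95_000,
    "quality_and_rework_risk": 30_000,
}

tco_direct = sum(direct_costs_per_year.values())
tco_full   = tco_direct + sum(indirect_costs_per_year.values())
print(f"Direct TCO: {tco_direct:,}   Full view incl. indirect items: {tco_full:,}")
```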

As an example, we looked at how MAN Truck UK used this method to develop services that went beyond the prevailing repairs, parts and maintenance, using telematics and clever analytics to monitor and improve the performance and fuel consumption of their trucks. This approach helped grow their business by an order of magnitude over a number of years.

Mining Service Management Data to improve performance

We then took a deep dive into how Endress+Hauser have developed applications that mine service management data to improve service performance:

Thomas Fricke (Service Manager) and Enrico De Stasio (Head of Corporate Quality & Lean) facilitated a three-hour discussion on their journey from idea to a working application integrated into their service processes. These were the key learning points that emerged:

Leadership

In 2018 the senior leadership concluded that, to stay competitive, they needed to do far more to consolidate their global service data into a “data lake” that could be used to improve their own service processes and bring more value to customers. As a company they had already seen the value of organising data: for the past 20 years, every new system had come with a “digital twin” holding all the data for that system electronically in an organised fashion. Initially this was basic bill-of-material data, but it has since grown in sophistication. This was a good start, but they needed to go further, and the leadership team committed resources to do so.

  • The first try: The project initially focused on collecting and organising data from its global service operations into a data lake. This first phase required the development of infrastructure, processes and applications that could analyse service report data and turn it into actionable intelligence. The initial goal was to make internal processes more efficient and so improve the customer experience. E+H looked for patterns in the reports of service engineers that could:
    • Be used to improve the performance of Service through processes and individuals
    • Be used by other groups such as engineering to improve and enhance product quality.
  • Outcome: Even though progress was made in many areas, even with advanced statistical methods they could not extract or deliver the value they had hoped for from the data. They needed to look at something different.
  • Leveraging AI technologies: The Endress+Hauser team knew they needed to look for patterns in large data sets, and they knew that self-learning technologies, frequently termed AI, could potentially help solve this problem. They teamed up with a local university and created a project to develop a proof of concept. This helped the project gain traction as the potential of the application they had created started to emerge. It was not an easy journey and required “courage to trust the outcomes, see them fail and then learn from the process”. However, after about 18 months they were able to integrate the application into their normal working processes: every day they scan the service reports from around the world, in different languages, to identify common patterns in product problems or anomalies in local service team activities (a generic sketch of this kind of report pattern mining follows after this list). This information is fed back to the appropriate service teams for action. The application also acts as a central hub where anyone in the organisation can access and interrogate service report data to improve performance and develop new value propositions.
  • Improvement: The project does not stop there. It is now embedded in the service operations and used as a basic tool for continuous improvement. In effect, this has shifted the whole organization to be more aware of the value of its data.
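
To illustrate the general idea (not Endress+Hauser’s actual application), the sketch below shows a minimal way of surfacing recurring patterns in free-text service reports with off-the-shelf Python tooling: vectorize the report texts, cluster them, and inspect the dominant terms per cluster. The report texts and cluster count are assumptions for the example:

```python
# Generic illustration only -- not Endress+Hauser's actual application.
# Surface recurring patterns in free-text service reports by vectorizing the
# texts and clustering them, then inspecting the dominant terms per cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [  # stand-in report excerpts (assumed, already translated into one language)
    "flow sensor drift after firmware update, recalibrated on site",
    "display unreadable, replaced front panel, customer reported condensation",
    "sensor drift reported again, firmware rollback resolved the issue",
    "condensation behind display glass, sealed housing and replaced panel",
    "pressure transmitter drift, recalibration and firmware patch applied",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reports)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

# The highest-weighted terms of each cluster centre give a rough "pattern" label
for cluster_id, centroid in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[i] for i in centroid.argsort()[::-1][:4]]
    print(f"Cluster {cluster_id}: {', '.join(top_terms)}")
```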

Utilizing AI in B2B services

Regarding AI, our aim was to uncover some of the myths and benefits for service businesses, and the first task was to agree among the participants on what we really mean by AI. It took time, but we discovered that there are really two interpretations, which makes the term rather confusing. The first is a generic term used by visionaries and AI professionals to describe a world of intelligent machines and applications – important at a social and macroeconomic level, but perhaps not so useful for business operations, at least at a practical level. The second is an umbrella term for a group of technologies that are good at finding patterns in large data sets (machine learning, neural networks, big data, computer vision), that can interface with human beings (natural language processing), and that mimic human intelligence through self-learning algorithms. Understanding this second definition, and how these technologies can be used to overcome real business challenges, is where the immediate value of AI sits for today’s businesses. It was also clear that integrating these technologies into business processes will require leaders to address the change management challenges for their teams and customers.

To understand options for moving ahead at a practical level, we first looked briefly at Husky through an interview CIO Jean-Christophe Wiltz gave to CIOnet, where we learned that i) real business needs should drive technology implementation, and ii) before getting to AI technologies, there is a need to build the appropriate infrastructure in terms of databases and data collection and, most importantly, to be prepared to continually adapt this infrastructure as the business needs change.
