Internet of Things: Strategic Options for OEMs in Industrial Services

In 2015 we wrote an article suggesting that for service businesses the Internet of Things (IoT) facilitates servitization: “the technologies can be understood as enablers of business model shifts where OEMs (and most product manufacturers) will move away from “transactions” -selling a machine and then servicing it for a fee- to “relationships” -selling attributes and performance and tying compensation to outcomes via long-term contracts- a process that we call ‘servitization’”. And we thought customers would be interested: “the new technology makes possible the achievement of required performance standards for less cost (in the sense of deadweight costs: e.g. assets will break down less) and therefore should be in high demand.” We advised that OEMs, in deciding whether to invest in the IoT, would need, at least in the beginning, to be guided primarily by the potential downside of being left out, disconnected from the data generated by their products, rather than by a concrete upside from new services, the timing and value of which was still uncertain. The best way forward, we said, “is to invest in analytics, computing, and data acquisition capability and to develop end-to-end Apps (from prognostics to remote intervention) for their installed base” (despite numerous hurdles, incl. data regulatory hurdles, that would still need to be overcome). While this requires non-trivial upfront investment, it reduces the risk of losing touch with the installed base and opens up an opportunity in the new market of industrial and asset management Apps. “One certainty is that IT is moving from running supporting processes very squarely into the heart of industrial businesses.”

Most of this is still sound advice for companies that have not been at the forefront of development. However, as some of the technologies have matured while others have emerged (e.g. new machine learning technologies), and as more devices and assets have been connected and strategies tried out by industrial companies, it is time for another review of the situation and the strategic options. Because it turns out that the impact of the IoT, and of digitization more broadly, on the market and the competitive environment is stronger and more complex than initially thought. We have since seen disruption in many industries, and the (field) service industry will not be an exception.

As a first step, it’s helpful to define what we call the “IoT” not just as connected devices, products or assets, but as the totality of the hardware and software needed to handle connectivity and manage the data, the algorithms needed to analyze and use the data (e.g. to make predictions), and the related business applications, whether for interventions or for decision-making more generally -in the spirit of the Industrial Internet Consortium.

Furthermore, while some, mainly statistical, methodologies for analyzing data have been around for a long time (e.g. Regression Analysis, Bayesian Inference, Monte Carlo methods, Markov Chains), it is the progress made in various areas of machine learning (lately in Deep Learning Neural Networks, Reinforcement Learning, Transfer Learning, etc.) that has enabled a big leap forward in the ability to predict outcomes of any type. And while large amounts of data are still required for many applications, progress in Deep Learning and Reinforcement Learning methodologies may reduce the amount of data required. This is important because while some existing industrial digital applications can produce useful outcomes based on (limited) data provided by automation systems (e.g. DCS, PLC) and historians, so-called “killer apps” (e.g. predictive maintenance diagnostics or prognostics) require large amounts of data, sometimes so large that they don’t really exist in any single company. In general, the amount of data required depends on the type of application/prediction, the robustness and accuracy required for it to be useful, and the technology/methodology used. In other words, for some applications data from automation systems and historians may be sufficient; for others it may not. Reducing the amount of data required will therefore do much to enable improvement and expansion of applications into new areas.
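To make the distinction concrete, here is a toy sketch (hypothetical sensor data, standard library only, not any vendor’s product) of the kind of classical statistical baseline that predates modern machine learning: a trailing-window z-score check that flags readings deviating strongly from recent history. ML methods extend this idea to far richer, and usually far more data-hungry, patterns.

```python
import random
import statistics

def detect_anomalies(readings, window=50, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the mean of the trailing window: a classical
    statistical baseline for condition monitoring."""
    anomalies = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.pstdev(hist)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Hypothetical vibration signal: stable noise with one injected fault spike
random.seed(7)
signal = [random.gauss(1.0, 0.05) for _ in range(300)]
signal[200] = 2.5  # simulated bearing fault
print(detect_anomalies(signal))  # index 200 is among the flagged readings
```

Such a baseline works with modest data volumes; detecting subtler degradation patterns across heterogeneous fleets is where the large data requirements discussed above come in.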

Two examples illustrate this range: GE uses a vast data trove, gathered through increased sensing, to make predictions and provide services around its new (2018) aircraft engines, while TaKaDu, an Israeli digital services company, uses existing data from SCADA systems to predict and detect leakages in water networks -with good enough accuracy.

Turning now to structuring new business models, it’s important to decide which “value driver” to focus on. There are essentially two:

  • The sales / revenue value driver, i.e. what, how, how much and at what price is sold to customers (i.e. the offerings), and
  • The operations / delivery system value driver, i.e. how the offering is developed, produced and delivered in terms of cost and efficiency and related parameters such as speed or quality.

Interestingly, in many service organizations (particularly in industries that develop and sell complex engineered customized systems), the focus is usually on increasing top-line growth, i.e. tinkering with the sales/revenue value driver. This may be influenced by heady consultant reports or white papers on driving growth through new technology. The reality is, of course, that it is difficult to grow in competitive markets -whether through technology or other means. Either the company must grow market share through a better or new product or service (a non-trivial undertaking, usually requiring heavy investment, and one that can usually be copied fairly easily if the company is unable to moat its advantage), or the market itself must be growing. But there is no reason to think that the market for services into particular (customer) industries will grow unless, again, those industries are growing. And it is often futile to believe customers will shift part of their revenue dollars to service providers, even if these promise higher productivity -simply because higher productivity achieved through a supplier cannot be moated (i.e. protected and sustained). The normal outcome is that customers negotiate the price of the service down to levels where revenue stays flat, and this phenomenon is even stronger when a service (or product) is based on information technology[1], as we shall see.

Therefore, for most service organizations, leveraging the IoT to generate their own cost and productivity gains by automating their value chain, delivery system and decision making would be a wise strategic (offensive) choice, as it would ultimately improve competitiveness. Nevertheless, in this article we’ll focus on the topline aspects of business models and the issues surrounding them, and use a later article to focus on the impact of IoT and digitization (incl. analytics and AI) on service operations.

In general, industrial organizations have approached the topline possibilities of the IoT in one or more of four different yet interrelated ways:

  • Add connectivity, sensing capacity and/or intelligence to products to improve their inherent attractiveness and competitiveness. To date, however, there is no evidence that OEMs can charge more for connected products; rather, they are absorbing the costs, if any, of the new features themselves -at least for new products (though in the process of digitizing factories many customers are upgrading legacy products/assets at their own cost). Even so, this is a necessary basic defensive strategy, as non-connectable, non-smart products are already at a disadvantage and will be more so in the future.
  • Develop and wrap services (usually in the form of applications -Apps-) around the product to generate new revenue streams. There are many different kinds of Apps, ranging from passive and simple (such as various dashboards that aggregate and present data or information in comprehensible, actionable ways) to complex (e.g. anomaly detection, diagnostics, prognostics) to active and complex (from advice on how to improve operations, efficiency, quality or breakdown avoidance to automatic interventions that prevent or fix a problem).
  • Develop and operate industrial IoT platforms and ecosystems. These occupy a ground encompassing an operating system for connected products, data flow and management -both on the edge and centrally in the cloud- analytics engines, and business applications with connections to enterprise software at the top end. Examples include GE Predix, Siemens Mindsphere, Bosch IoT and Hitachi Lumada. Just like Apps, they compete with offerings from pure software providers, whether industrial software specialists like PTC (with ThingWorx), the software majors like Microsoft, Amazon, IBM or SAP, who entered this market through expansion of cloud computing offerings and analytics engines (and are actually both the data/computing backbone, i.e. service suppliers to, and competitors of, the industrial platforms), or new entrants such as Eurotech or Litmus Automation. Becoming a platform provider requires very large investments, significant algorithmic knowhow and access to a large customer base; this approach is therefore not open to most OEMs. However, platforms are rapidly becoming the standard infrastructure for delivering Apps, and OEMs need to live with them and understand them well. The platform market is growing quite rapidly (between 10% and 20% p.a., with an increasing trend), driven both by end-user customers (process industries, mining, utilities, oil and gas), mainly for asset management applications at present, as well as by some (large) machinery OEMs. At some point, driven by the “network effects” and “winner-takes-all” phenomena common in platform markets, the number of viable platform providers will shrink and a few will gain outsized market shares. However, that time has not yet come.
  • Create new integrated service/product offerings (Product-Service-Bundles, Product-as-a-Service), taking advantage of the ability of new technology to de-risk these offerings through continuous monitoring, anomaly detection and (semi-)automatic, almost real-time interventions. There are examples from many industries, including automotive, power generation (renewables), oil and gas, mining, civil aviation, medical equipment, etc., where this is happening at an increasing pace. However, we will not look into this here; it has been discussed in other articles, and we will go deeper into the technology side in a future article.
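The spectrum of Apps described in the second approach, from passive dashboards through diagnostics to active interventions, can be sketched in a few lines. The asset names, fields and the vibration threshold below are purely illustrative assumptions, not any OEM’s actual product.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    asset_id: str
    temperature_c: float
    vibration_mm_s: float

def dashboard(readings):
    """Passive and simple: aggregate and present data."""
    return {r.asset_id: (r.temperature_c, r.vibration_mm_s) for r in readings}

def diagnose(reading, vib_limit=7.1):
    """Complex: turn raw data into a finding (illustrative fixed threshold)."""
    if reading.vibration_mm_s > vib_limit:
        return "excess vibration: bearing wear suspected"
    return "ok"

def intervene(reading):
    """Active and complex: derive a concrete action from the diagnosis."""
    finding = diagnose(reading)
    if finding == "ok":
        return None
    return f"schedule inspection of {reading.asset_id}: {finding}"

fleet = [Reading("pump-01", 61.0, 3.2), Reading("pump-02", 78.5, 9.4)]
print(dashboard(fleet))
print(intervene(fleet[1]))
```

Real Apps replace the fixed threshold with learned models and close the loop automatically, but the ladder from presentation to diagnosis to action is the same.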

So, going back to the digital service Apps, which are the main pathway for most OEMs into this market, two key questions need to be addressed: i) how to price them, and ii) how they will be delivered.

One thing that needs to be understood about Apps is that OEMs have no exclusive or protected capability to develop them. Anyone with the knowhow and access to the generated data can develop Apps, including competitors and, most importantly, third-party (usually digitally native) developers/providers[2]. If an App addresses an interesting and substantial market, it will therefore almost certainly face stiff competition. Predictive Maintenance (PdM) is a case in point: over the past few years dozens if not hundreds of App developers have emerged (both start-ups with expertise in machine learning and analytics as well as established companies). And usually, third-party providers focus in some form on the totality of industrial markets, not on a particular OEM-linked installed base or product type. This matters because these Apps have high development costs (fixed costs), while the cost of adding customers once the App has been developed is low (variable cost). Hence, to amortize these costs, a large number of customers (and products/assets on which to apply the App) is required. Those large numbers are also required to generate sufficiently large amounts of data to produce predictions accurate and robust enough to appeal to customers. For example, TaKaDu may use limited amounts of data from each customer, but the total amount of data from all customers ultimately delivers the necessary accuracy and robustness of the predictions. Both the competitive intensity and the need to attract large numbers of customers put downward pressure on the (initial) price of Apps, and this is a problem for OEMs whose focus is only on their own customers and their own products or installed base -excluding very large companies, it would be difficult to create a sufficiently large customer base.
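The fixed-versus-variable cost logic can be made concrete with some back-of-the-envelope arithmetic. The figures below are hypothetical, chosen only to show how price pressure magnifies the customer base an App needs.

```python
import math

def breakeven_customers(dev_cost, annual_price, variable_cost_per_customer):
    """Customers needed per year to cover the App's fixed development cost,
    given a (small) variable cost of serving each additional customer."""
    contribution = annual_price - variable_cost_per_customer
    if contribution <= 0:
        raise ValueError("price does not cover the variable cost")
    return math.ceil(dev_cost / contribution)

# Hypothetical figures: a $2m App, $10k/yr subscription, $1k/yr serving cost
print(breakeven_customers(2_000_000, 10_000, 1_000))  # 223 customers
# Competitive pressure halves the price: the required base more than doubles
print(breakeven_customers(2_000_000, 5_000, 1_000))   # 500 customers
```

An OEM whose addressable installed base numbers in the dozens, not hundreds, of customers cannot close this gap, which is exactly the problem described above.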

There are further problems associated with pricing and revenue generation. For example, some Apps may cannibalize existing revenue streams, particularly if they are meant to reduce or replace human-based revenue generation. PdM is again a case in point. Done correctly, over time it will reduce the demand for repairs, even spare parts, on which most OEMs rely for the bulk of their service profits. Some therefore envisage payment methods changing, with customers paying for maximization of uptime rather than minimization of downtime; in practice, however, that is usually difficult to achieve unless compensation is linked to output or throughput in some Product-as-a-Service arrangement. But there is also the problem of pricing method and payment culture. Most service providers use “cost-plus” methods to price services (as value-based or competitive pricing is often impractical or difficult to apply in everyday situations). Conventional PdM is mostly based on some form of condition monitoring (e.g. vibration analysis for rotating machinery) and analysis by (expensive) human experts. This service is then billed to customers based on the high variable cost of the expert plus some margin. Assume now that the human is replaced by a computing system with a negligible variable cost: it is highly unlikely that customers will agree to pay the same price, even if the quality (outcome) of the service is the same or better. This is not irrational behavior by customers but a result of cost structures and of the fact that the introduction of information technology reduces the amount of resources needed to achieve the outcome or accomplish the task. As the unit costs of PdM decline -almost to zero- the price will follow.
This kind of “natural” phenomenon in computation-based tasks is called “dematerialization” and it can be seen in many well-publicized trends. For example, the cost of sequencing a whole genome fell from about US$100 million in 2001 to a few hundred dollars today. The cost of 1 million transistors was $222 in 1992 and is about 3 cents today, with companies like Intel and IBM squeezing over 300 million transistors onto a 1 cm² chip. But maybe the best illustration comes from the iPhone, which already in 2011 included technology that was worth over $900,000 in the year it was introduced.

So, the critical success factor for digital Apps is scale, achieved not only through more customers but also by expanding the scope. In PdM this would mean adding production equipment that would have been out of scope in conventional condition monitoring because of its high cost. It is not clear whether OEMs of machinery can muster the necessary focus to compete in this way -that is, to generate revenue and profit from scale rather than from individual customers and narrow markets.

Then there is the issue of App delivery. While today 1:1 relationships with customers for monitoring, data transfer and provision of digital services are possible, as customers digitize factories and supply chains through platforms (including in-house developments), Apps will need to be delivered through those platforms -just as consumer Apps are delivered through Apple’s iOS App Store or Google Play. This is not trivial, and not only because standardization and interoperability in the IIoT are still some way off, though the Industrial Internet Consortium is working towards this goal. OEMs will need to partner with platform providers to reach customers and sell their Apps, as, for example, Pitney Bowes, a postage meter and mailing equipment manufacturer, and Joy Global, a mining equipment supplier now merged with Komatsu, have already done. Over time, there will be a need to make Apps compatible with all the top platforms.

So, in conclusion, there seem to be four options for OEMs to participate in the digital service App market:

The first option is to develop basic service Apps to support the product, absorb the cost and offer them for free or at nominal cost to customers. The purpose here is to support the main (conventional) product/service offering. While this may be good marketing practice in the short term and may serve as a defensive strategy, it does not protect from disruption or reduced competitiveness in the longer term. There is not much evidence that consumer companies employing such strategies have avoided losing market share, though it is still too early to pass final judgment.

The second option is to focus on niche problems that can be solved through Apps, where significant domain (process) knowledge and expertise are necessary, providing an advantage to the OEM. Competitive intensity will presumably be lower in these areas and prices higher. The impact on the top line will, however, be low, and it is not clear whether this is sustainable, as customers may well learn to build such Apps on their own.

The third option is for OEMs to compete fully in the market with Apps addressing mainstream, large-scale problems, based on third-party platforms or with “platform-on-a-platform” strategies, i.e. building smaller platforms on larger third-party platforms, perhaps with a focus on their own type of product and installed base, with offerings that can add value, including solutions such as benchmarking or AI-based problem-solving. Examples may include Netzsch, a German manufacturer of grinding machines, building on SAP, or Vestas, a wind turbine maker, building on IBM’s platform. These strategies require significant changes in approach (the need to go beyond the own product) to gain scale and may work only for some particular types of customers and not for others; this will depend mainly on the nature of the business, the customer’s relative size and IIoT approach, as well as the OEM’s market power and wallet share at a factory/industry level. They also require significant investment, commitment and seriousness about becoming a digital services provider. To achieve significant and sustained topline growth, digital services cannot be subordinated to products.

Finally, as already mentioned, there is the option of using technology to enable a transition to Product-as-a-Service offerings, which a number of companies are pursuing. Again this requires not only significant initial investment and commitment, but also a change in a company’s risk management approach and culture and, again, depends strongly on the nature of the business. It will be the subject of a separate article.

Compared to 2015, OEMs should be in a much better position to make judgements and choices on how to leverage the IIoT and digitization for topline growth. This, however, doesn’t necessarily make things easier. The choices are hard, and the commitment and investment needed substantial. What is clear is that without significant involvement in the digital service market with the right strategy -Apps, Platforms or PaaS- the impact will be felt not only in service but in the entire business as products get commoditized, prices come under pressure and relationships with customers fall victim to new ones based on data.

[1] In 2003, Nicholas Carr wrote a seminal article “IT doesn’t matter” in the Harvard Business Review explaining why information technology cannot convey competitive advantage: Everybody has access to the same technologies.

[2] We will neglect here the regulatory issue of data and data security, noting only that data belong to end user customers and access to data can sometimes be a significant hurdle to overcome for App providers. Many end-user customers develop their own Apps in order not to share data with providers.


Service Innovation for value-driven opportunities:

Facilitated by Professor Mairi McIntyre from the University of Warwick, the workshop explored service innovation processes that help us understand what makes our customers successful.

In particular, the Customer Value Iceberg principle goes beyond the typical Total Cost of Ownership view of the equipment world and explores how that equipment impacts the success of the business. It forces us to consider not only the direct costs associated with usage of the equipment but also indirect costs such as working capital and risks.

As an example, we looked at how MAN Truck UK used this method to develop services that went beyond the prevailing repairs, parts and maintenance to methods (through telematics and clever analytics) for monitoring and improving the performance and fuel consumption of their trucks. This approach helped grow their business by an order of magnitude over a number of years.

Mining Service Management Data to improve performance

We then took a deep dive into how Endress+Hauser have developed applications that mine Service Management data to improve service performance:

Thomas Fricke (Service Manager) and Enrico De Stasio (Head of Corporate Quality & Lean) facilitated a three-hour discussion on their journey from idea to a real working application integrated into their Service processes. These were the key learning points that emerged:


In 2018 the senior leadership concluded that, to stay competitive, they needed to do far more to consolidate their global service data into a “data lake” that could be used to improve their own service processes and bring more value to customers. The company had already seen the value of organising data: for the past 20 years, every new system has had a “digital twin” holding all the data for that system electronically, in an organised fashion. Initially this was basic Bill of Material data, but it has since grown in sophistication. A good start, but they needed to go further, and the leadership team committed resources to do so.

  • The first try: The project initially focused on collecting and organising data from its global service operations into a data lake.  This first phase required the development of infrastructure, processes and applications that could analyse service report data and turn it into actionable intelligence. The initial goal was to make internal processes more efficient, and so improve the customer experience. E+H looked for patterns in the reports of service engineers that could:
    • Be used to improve the performance of Service through processes and individuals
    • Be used by other groups such as engineering to improve and enhance product quality.
  • Outcome: Even though progress was made in many areas, even with advanced statistical methods they could not extract or deliver the value they had hoped for from the data. They needed to look at something different.
  • Leveraging AI technologies: The Endress+Hauser team knew they needed to look for patterns in large data sets, and they knew that the self-learning technologies frequently termed AI could potentially help solve this problem. They teamed up with a local university and created a project to develop a Proof of Concept. This helped the project gain traction as the potential of the application they had created started to emerge. It was not an easy journey and required “courage to trust the outcomes, see them fail and then learn from the process”. However, after about 18 months they were able to integrate the application into their normal working processes: every day they scan service reports from around the world, in different languages, to identify common patterns in product problems or anomalies in local service team activities. This information is fed back to the appropriate service teams for action. The application also acts as a central hub where anyone in the organisation can access and interrogate service report data to improve performance and develop new value propositions.
  • Improvement: The project does not stop there. It is now embedded in the service operations and used as a basic tool for continuous improvement. In effect, this has made the whole organization more aware of the value of its data.
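The core idea of mining free-text service reports for recurring patterns can be illustrated with a deliberately crude sketch. The report texts below are invented, and simple term counting stands in for the self-learning techniques E+H actually applied; it is a toy, not their system.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "and", "on", "in", "was", "to", "of", "at", "after", "no"}

def common_patterns(reports, top_n=3):
    """Count recurring terms across free-text service reports,
    counting each term at most once per report."""
    counts = Counter()
    for text in reports:
        tokens = set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS
        counts.update(tokens)
    return counts.most_common(top_n)

# Invented example reports
reports = [
    "Sensor drift observed on flowmeter, recalibrated on site",
    "Flowmeter showed drift after temperature spike, recalibrated",
    "Replaced seal on pump, no drift observed",
]
print(common_patterns(reports))  # "drift" surfaces as the most common pattern
```

Real systems use multilingual language models and clustering rather than word counts, but the goal is the same: surfacing product problems that no single report reveals on its own.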

Utilizing AI in B2B services

Regarding AI, our task was to uncover some of the myths and benefits for service businesses, and the first step was to agree among the participants on what we really mean by AI. It took time, but we discovered that there are really two interpretations, which makes the term rather confusing. The first is a generic term used by visionaries and AI professionals to describe a world of intelligent machines and applications: important at a social and macroeconomic level, but perhaps not so useful for business operations, at least at a practical level. The second is an umbrella term for a group of technologies based on self-learning algorithms that are good at finding patterns in large data sets (machine learning, neural networks, big data, computer vision) and that can interface with human beings (Natural Language Processing), mimicking human intelligence. Understanding this second definition, and how these technologies can be used to overcome real business challenges, is where the immediate value of AI sits for today’s businesses. It was also clear that integrating these technologies into business processes will require leaders to address the change management challenges for their teams and customers.

To understand options for moving ahead at a practical level, we first looked briefly at Husky through an interview that CIO Jean-Christophe Wiltz gave to CIOnet, where we learned that i) real business needs should drive technology implementation, and ii) before getting to AI technologies there is a need to build the appropriate infrastructure in terms of databases and data collection -and, most importantly, to be prepared to continually adapt this infrastructure as business needs change.
