Augmented Reality Deployment in Service For Maximum Cost Impact

Profitable long-term growth comes from having the right people in the right place at the right time. Technology, although important, usually plays a secondary role.

What types of Augmented Reality deployment and use modes are the most appropriate for maximum cost impact? 

In a previous article on service in the wake of COVID-19 and the associated recession, we discussed the need to drastically reduce the cost of uptime for customers. To do that, both the cost of providing service and the need for service have to be reduced, the latter by predicting and preventing, i.e. minimizing, downtime. Technology in its different forms must play a significant role in this, and here we look at Augmented Reality (AR).

The conventional operating model of (field) service is field engineers visiting customer sites to diagnose, troubleshoot, and repair equipment problems. This requires building up know-how and expertise, capacity, and, importantly, logistics. All come with significant costs, both fixed and variable.

Augmented Reality (and related forms of digital visual environments) is changing this picture. In futuristic, though already partially existing, scenarios, engineers access digital representations of equipment (Digital Twins) and intervene on the corresponding physical equipment remotely, with or without the help of AI-based algorithms. This fundamentally alters the cost structure of Uptime (and possibly the competitive space) by impacting the cost of logistics, information processing, and the capacity required for the same levels of service. While such a scenario may need some years to become mainstream, some of its constituent elements can already be used.

The current key areas of AR impact include training, remote guidance or assistance, instructions and task management, as well as insights through data overlay.

By freeing the user from the constraints of two-dimensional text and images (screens, paper documents), AR has been shown to drastically improve the efficiency of learning (the brain learns better in three dimensions). People can be trained in new skills, or in applying existing skills to new equipment, in a fraction of the time it takes conventionally, and they internalize the material far better through simulations and real-time case studies.

Remoting in an expert can save significant amounts of time and helps avoid costly mistakes, as does the availability of step-by-step instructions in the field of view, whether accessed manually or provided algorithmically. Experience and outcomes can be captured and reused. Collaborative problem solving can be supercharged without experts having to visit the site or access the physical equipment.

Finally, the availability of real-time contextual data on equipment and processes, in the field of view, can significantly improve problem-solving, decisions, and interventions.

Most companies implementing AR focus on the low-hanging fruit of remoting in experts and providing step-by-step instructions, as the technology is widely available and inexpensive, and the investments required for implementation (e.g. digitization of content, integration with IoT) are quite low.

However, the cost impact of AR deployment will depend heavily on context and circumstances: the existing operating model and the nature of the equipment served, as well as management's ability to follow through with the changes necessary to realize potential savings.

Examples:

A company makes complex, made-to-order machinery that it distributes around the world. For Uptime it relies on a fairly small number of highly experienced engineers who spend most of their time at customer sites, possibly with a regional or equipment-type focus. The key limiting factor is clearly the high cost of expert capacity (the quantity and marginal cost of engineers, the time required to develop expertise, and imbalances between supply and demand for expertise leading to prolonged downtime and declining customer satisfaction). Such a business needs to break the expert-capacity limitation. Diagnostics and troubleshooting are probably far more capacity-constraining than actual repairs. And problem types usually follow some form of Pareto rule, where infrequent problems (10-20%) consume most (80-90%) of the problem-solving resources and account for the bulk of the costs.

On the other hand, a company making standardized equipment may have problems centered far more on repair processes and logistics. Failure modes are fewer and more predictable. The key cost drivers are Time-To-Fix and MTTR. Engineers need not have significant expertise. For cost impact, the standard metrics need to improve and unit costs (the cost of an engineering-hour) need to come down.

Given the different circumstances and contexts, and hence the very different cost drivers in these two cases, the tactical approach to AR implementation should also differ.

In the latter case, the biggest impact can probably be achieved by providing field service personnel with step-by-step 3-D instructions, with experts remoted in when a problem turns out to be an outlier. This can have significant and rapid effects on capacity, but it can also shift the engineers' profile mix and so reduce unit costs. In addition, AR-based training can be used to equip novice engineers with the necessary skills more rapidly. To succeed, such an implementation would require management to follow up with rapid organizational and workflow changes and capacity adjustments.
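
As a rough, purely illustrative sketch of these two levers (shorter repair times and a cheaper engineer profile mix), the Python snippet below uses invented figures for incident volume, MTTR and hourly rates; it is not based on any real case.

```python
# Hypothetical, illustrative numbers only: what AR-guided, step-by-step repairs
# could do to annual labour cost for standardized equipment.

incidents_per_year = 5_000

# Before: experienced engineers, no AR guidance (assumed values).
mttr_before, rate_before = 4.0, 80      # hours per incident, EUR per engineering-hour
# After: AR instructions shorten repairs and let less specialised engineers handle most jobs.
mttr_after, rate_after = 3.0, 65

cost_before = incidents_per_year * mttr_before * rate_before
cost_after = incidents_per_year * mttr_after * rate_after
saving = cost_before - cost_after

print(f"Annual repair labour cost before: EUR {cost_before:,.0f}")
print(f"Annual repair labour cost after:  EUR {cost_after:,.0f} (saving {saving / cost_before:.0%})")
```

Under these assumed figures the combined effect of a shorter MTTR and a cheaper profile mix cuts repair labour cost by roughly 40%, which is why both levers need to be pulled together.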

In the former case, the cost of Uptime can clearly be impacted by expanding the productive utilization of existing expert capacity: remoting in experts on demand and using collaborative problem-solving. Direct cost impact can be achieved by offloading repairs to customer technical personnel, or outsourcing them to local subcontractors, supported by guidance and instructions. In addition, integration with IoT to access real-time data can further speed up the intervention process, reduce errors, and improve optimization. Follow-through management action in this case would be to change both the operating model for cost reductions and the business model for monetization.
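
A similarly hedged sketch for the expert-capacity case: the split of an expert's year into travel, diagnosis and repair below is assumed purely for illustration, as is the share of repairs that could be offloaded to customer technicians or subcontractors.

```python
# Hypothetical split of an expert's year; not data from any real service organisation.
annual_hours_per_expert = 1_600
travel_share, diagnosis_share, repair_share = 0.35, 0.35, 0.30

# Remote assistance removes most travel; guided repairs are offloaded to customer
# technicians or local subcontractors, so experts concentrate on diagnosis.
offloaded_repair_fraction = 0.7  # assumed
freed_hours = annual_hours_per_expert * (travel_share + repair_share * offloaded_repair_fraction)

baseline_diagnostic_hours = annual_hours_per_expert * diagnosis_share
capacity_multiplier = (baseline_diagnostic_hours + freed_hours) / baseline_diagnostic_hours

print(f"Hours freed per expert per year: {freed_hours:.0f}")
print(f"Diagnostic capacity multiplier:  {capacity_multiplier:.1f}x")
```

Even under these crude assumptions, redirecting travel and routine repair time back into diagnosis more than doubles effective expert capacity, which is the limitation this case needs to break.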

Other, more nuanced, cases are of course possible.


Service Innovation for value-driven opportunities

Facilitated by Professor Mairi McIntyre from the University of Warwick, the workshop explored service innovation processes that help us understand what makes our customers successful.

In particular, the Customer Value Iceberg principle goes beyond the typical Total Cost of Ownership view of the equipment world and explores how that equipment impacts the success of the business. It forces us to consider not only the direct costs associated with using the equipment but also indirect costs such as working capital and risks.

As an example, we looked at how MAN Truck UK used this method to develop services that went beyond the prevailing repairs, parts and maintenance to offerings that, through telematics and clever analytics, monitor and improve the performance and fuel consumption of their trucks. This approach helped grow their business by an order of magnitude over a number of years.
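
To illustrate the iceberg idea with invented, order-of-magnitude numbers (these are not MAN Truck figures), a modest improvement in fuel consumption across a fleet can dwarf even an aggressive cut in visible maintenance cost:

```python
# Hypothetical fleet figures for illustration of the Customer Value Iceberg principle.
fleet_size = 100
km_per_truck_per_year = 120_000
fuel_cost_per_km = 0.35             # assumed EUR/km
maintenance_cost_per_truck = 8_000  # assumed EUR/year

annual_fuel_spend = fleet_size * km_per_truck_per_year * fuel_cost_per_km
annual_maintenance_spend = fleet_size * maintenance_cost_per_truck

fuel_saving = annual_fuel_spend * 0.05            # assume telematics improve consumption by 5%
maintenance_saving = annual_maintenance_spend * 0.10  # assume a 10% cut in maintenance cost

print(f"Fuel spend:        EUR {annual_fuel_spend:,.0f} -> 5% saving  = EUR {fuel_saving:,.0f}")
print(f"Maintenance spend: EUR {annual_maintenance_spend:,.0f}   -> 10% saving = EUR {maintenance_saving:,.0f}")
```

Under these assumptions the fuel saving is several times larger than the maintenance saving, which is exactly the below-the-waterline value the iceberg view is meant to surface.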

Mining Service Management Data to improve performance

We then took a deep dive into how Endress+Hauser have developed applications that can mine Service Management data to improve service performance:

Thomas Fricke (Service Manager) and Enrico De Stasio (Head of Corporate Quality & Lean) facilitated a 3-hour discussion on their journey from idea to a real working application integrated into their Service processes. These were the key learning points that emerged:

Leadership

In 2018 the senior leadership concluded that, to stay competitive, they needed to do far more to consolidate their global service data into a “data lake” that could be used to improve their own service processes and bring more value to customers. As a company they had already seen the value of organising data: over the past 20 years, every new system had come with a “digital twin” holding all the data for that system electronically in an organised fashion. Initially this was basic Bill of Material data, but it has since grown in sophistication. So a good start, but they needed to go further, and the leadership team committed resources to do so.

  • The first try: The project initially focused on collecting and organising data from its global service operations into a data lake.  This first phase required the development of infrastructure, processes and applications that could analyse service report data and turn it into actionable intelligence. The initial goal was to make internal processes more efficient, and so improve the customer experience. E+H looked for patterns in the reports of service engineers that could:
    • Be used to improve the performance of Service through processes and individuals
    • Be used by other groups such as engineering to improve and enhance product quality.
  • Outcome: Even though progress was made in many areas, even using advanced statistical methods they could not extract or deliver the value they had hoped for from the data. They needed to look at something different.
  • Leveraging AI technologies: The Endress+Hauser team knew they needed to look for patterns in large data sets, and that the self-learning technologies frequently termed AI could potentially help solve this problem. They teamed up with a local university and created a project to develop a ‘Proof of Concept’. This helped the project gain traction as the potential of the application they had created started to emerge. It was not an easy journey and required “courage to trust the outcomes, see them fail and then learn from the process”. However, after about 18 months they were able to integrate the application into their normal working processes: every day it scans the service reports from around the world, in different languages, to identify common patterns in product problems or anomalies in local service team activities (a generic sketch of this kind of pattern mining follows this list). This information is fed back to the appropriate service teams for action. The application also acts as a central hub where anyone in the organisation can access and interrogate service report data to improve performance and develop new value propositions.
  • Improvement: The project does not stop there. It is now embedded in the service operations and used as a basic tool for continuous improvement. In effect, this has shifted the whole organisation to be more aware of the value of its data.
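
For readers wondering what such pattern mining can look like in practice, the snippet below is a generic, minimal sketch using scikit-learn on a handful of made-up report texts. It is not Endress+Hauser’s actual application, whose architecture is not described here; it simply illustrates the idea of clustering free-text service reports to surface recurring problem themes.

```python
# Generic illustration: cluster free-text service reports to find recurring themes.
# Toy English-only examples; a production system would also handle multiple languages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "flow sensor drift after firmware update, recalibrated on site",
    "display module failure, replaced power board",
    "sensor drift reported again, calibration values unstable",
    "customer reports intermittent display blackout, power supply suspected",
]

# Turn reports into TF-IDF vectors and group them into candidate problem clusters.
vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster, text in zip(labels, reports):
    print(cluster, text)

# Recurring clusters (e.g. "sensor drift") could be routed to product engineering,
# while anomalies in a local team's reports could be flagged for follow-up.
```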

Utilizing AI in B2B services

Regarding AI, our aim was to uncover some of the myths and benefits for service businesses, and the first task was to agree among the participants on what we really mean by AI. It took time, but we discovered that there are really two interpretations, which makes the term rather confusing. The first is a generic term used by visionaries and AI professionals to describe a world of intelligent machines and applications. This is important at a social and macroeconomic level, but perhaps not so useful for business operations, at least at a practical level. The second is an umbrella term for a group of technologies that are good at finding patterns in large data sets (machine learning, neural networks, big data, computer vision), that can interface with human beings (Natural Language Processing), and that mimic human intelligence through self-learning algorithms. Understanding this second definition, and how these technologies can be used to overcome real business challenges, is where the immediate value of AI sits for today’s businesses. It was also clear that integrating these technologies into business processes will require leaders to address the change management challenges for their teams and customers.

To understand options for moving ahead at a practical level, we first looked briefly at Husky through an interview CIO Jean-Christophe Wiltz gave to CIOnet, where we learned that i) real business needs should drive and tailor technology implementation, and ii) before getting to AI technologies there is a need to build the appropriate infrastructure in terms of databases and data collection, and, most importantly, to be prepared to continually adapt this infrastructure as business needs change.
