AWS Machine Learning Blog

How E.ON saves £10 million annually with AI diagnostics for smart meters powered by Amazon Textract

E.ON—headquartered in Essen, Germany—is one of Europe’s largest energy companies, with over 72,000 employees serving more than 50 million customers across 15 countries. As a leading provider of energy networks and customer solutions, E.ON focuses on accelerating the energy transition across Europe. A key part of this mission involves the Smart Energy Solutions division, which manages over 5 million smart meters in the UK alone. These devices help millions of customers track their energy consumption in near real time, receive accurate bills without manual readings, reduce their carbon footprints through more efficient energy management, and access flexible tariffs aligned with their usage.

Historically, diagnosing errors on smart meters required an on-site visit—an approach that was both time-consuming and logistically challenging. To address this challenge, E.ON partnered with AWS to develop a remote diagnostic solution powered by Amazon Textract, a machine learning (ML) service that automatically extracts printed text, handwriting, and structure from scanned documents and images. Instead of waiting for an engineer, the customer captures a 7-second video of their smart meter, which is automatically uploaded to AWS through the E.ON application for remote analysis. In real-world testing, the solution delivers 84% diagnostic accuracy. Beyond cost savings, this ML-powered solution improves the consistency of diagnostics and can detect malfunctioning meters before issues escalate.

By transforming on-site inspections into quick-turnaround video analysis, E.ON aims to reduce site visits, accelerate repair times, help assets achieve their full expected lifecycle, and cut annual costs by £10 million. This solution also helps E.ON maintain its 95% smart meter connectivity target, further demonstrating the company’s commitment to customer satisfaction and operational excellence.

In this post, we dive into how this solution works and the impact it’s making.

The challenge: Smart meter diagnostics at scale

Smart meters are designed to provide near real-time billing data and support better energy management. But when something goes wrong, such as a Wide Area Network (WAN) connectivity error, resolving it has traditionally required dispatching a field technician. With 135,000 on-site appointments annually and costs exceeding £20 million, this approach is neither scalable nor sustainable.

The process is also inconvenient for customers, who often need to take time off work or rearrange their schedules. Even then, resolution isn’t guaranteed. Engineers diagnose faults by visually interpreting a set of LED indicators on the Communications Hub, the device that sits directly on top of the smart meter. These LEDs (SW, WAN, HAN, MESH, and GAS) blink at different frequencies (Off, Low, Medium, High), and accurate diagnosis requires matching the blink patterns to a technical manual. With no standardized digital output and thousands of possible combinations, the risk of human error is high, and without a confirmed fault in advance, engineers might arrive without the tools needed to resolve the issue.

The following visuals make these differences clear. The first is an animation that mimics how the four states blink in real time, with each pulse lasting 0.1 seconds.

Animation showing the four LED pulse states (Off, Low, Medium, High) and the wait time between each 0.1-second flash.

The following diagram presents a simplified 7-second timeline for each state, showing exactly when pulses occur and how they differ in count and spacing.

Timeline visualization of LED pulse patterns over 7 seconds.

E.ON wanted to change this. They set out to eliminate unnecessary visits, reduce diagnostic errors, and improve the customer experience. Partnering with AWS, they developed a more automated, scalable, and cost-effective way to detect smart meter faults, without needing to send an engineer on-site.

From manual to automated diagnostics

In partnership with AWS, E.ON developed a solution where customers record and upload short, 7-second videos of their smart meter. These videos are analyzed by a diagnostic tool, which returns the error and a natural language explanation of the issue directly to the customer’s smartphone. If an engineer visit is necessary, the technician arrives equipped with the right tools, having already received an accurate diagnosis.

The following image shows a typical Communications Hub, mounted above the smart meter. The labeled indicators—SW, WAN, HAN, MESH, and GAS—highlight the LEDs used in diagnostics, illustrating how the system identifies and isolates each region for analysis.

A typical Communications Hub, with LED indicators labeled SW, WAN, MESH, HAN, and GAS.

Solution overview

The diagnostic tool follows three main steps, as outlined in the following data flow diagram:

  1. Upon receiving a 7-second video, the solution breaks it into individual frames. A Signal Intensity metric flags frames where an LED is likely active, drastically reducing the total number of frames requiring deeper analysis.
  2. Next, the tool uses Amazon Textract to find the text labels (SW, WAN, MESH, HAN, GAS). These labels, serving as landmarks, guide the system to the corresponding LED regions, where custom signal- and brightness-based heuristics determine whether each LED is on or off.
  3. Finally, the tool counts pulses for each LED over 7 seconds. This pulse count maps directly to Off, Low, Medium, or High frequencies, which in turn align with error codes from the meter’s reference manual. The error code can either be returned directly, as shown in the conceptual view, or translated into a natural language explanation using a dictionary lookup built from that manual.
A conceptual view of the remote diagnostic pipeline, centered around the use of Textract to extract insights from video input and drive error detection.

A 7-second clip is essential to reduce ambiguity around LED pulse frequency. For instance, over a shorter five-second window, a Low-frequency LED might pulse only once, or not at all, making it easy to mistake for Off. Extending the recording to 7 seconds makes each frequency (Off, Low, Medium, or High) unambiguous:

  • Off: 0 pulses
  • Low: 1–2 pulses
  • Medium: 3–4 pulses
  • High: 11–12 pulses

Because there’s no overlap among these pulse counts, the system can now accurately classify each LED’s frequency.
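
To make the mapping concrete, the following is a minimal Python sketch of how a pulse count could be translated into one of the four frequencies. The band boundaries come directly from the list above; the function name and the handling of out-of-band counts are illustrative, not E.ON’s production logic.

def classify_frequency(pulse_count: int) -> str:
    """Map a pulse count observed over a 7-second clip to an LED frequency.

    The bands follow the mapping above (Off: 0, Low: 1-2, Medium: 3-4,
    High: 11-12). Counts outside these bands are flagged as ambiguous so
    they can be re-recorded rather than misclassified.
    """
    if pulse_count == 0:
        return "Off"
    if 1 <= pulse_count <= 2:
        return "Low"
    if 3 <= pulse_count <= 4:
        return "Medium"
    if 11 <= pulse_count <= 12:
        return "High"
    return "Ambiguous"  # for example, a partially captured clip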

In the following sections, we discuss the three key steps of the solution workflow in more detail.

Step 1: Identify key frames

A modern smartphone typically captures 30 frames per second, resulting in 210 frames over a 7-second video. As seen in the earlier images, many of these frames appear as though the LEDs are off, either because an LED is inactive or because the frame falls between pulses, highlighting the need for key frame detection. In practice, only a small subset of the 210 frames contains a visibly lit LED, making it unnecessarily expensive to analyze every frame.

To address this, we introduced a Signal Intensity metric. This simple heuristic examines color channels and assigns each frame a likelihood score of containing an active LED. Frames with a score below a certain threshold are discarded, because they’re unlikely to contain active LEDs. Although the metric might generate a few false positives, it effectively trims down the volume of frames for further processing. Testing in field conditions has shown robust performance across various lighting scenarios and angles.
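
As an illustration, the following sketch shows one way such a metric could be computed with OpenCV: each frame is scored by how much bright, saturated color it contains, and only frames that stand out against the clip’s median score are kept. The exact channels, weighting, and threshold used in E.ON’s heuristic aren’t published, so the details below are assumptions.

import cv2
import numpy as np

def signal_intensity(frame: np.ndarray) -> float:
    """Score a BGR frame by how much bright, saturated color it contains."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].astype(np.float32) / 255.0
    value = hsv[:, :, 2].astype(np.float32) / 255.0
    # An active LED shows up as a region that is both bright and saturated.
    return float(np.mean(saturation * value))

def select_key_frames(video_path: str, threshold_ratio: float = 1.5) -> list[int]:
    """Return the indices of frames likely to contain a lit LED."""
    capture = cv2.VideoCapture(video_path)
    scores = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        scores.append(signal_intensity(frame))
    capture.release()
    # A relative threshold adapts to ambient lighting instead of a fixed cutoff.
    baseline = float(np.median(scores))
    return [i for i, score in enumerate(scores) if score > baseline * threshold_ratio]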

Step 2: Inspect light status

With key frames identified, the system next determines which LEDs are active. It uses Amazon Textract to treat the meter’s panel like a document. Amazon Textract identifies all visible text in the frame, and the diagnostic system then parses this output to isolate only the relevant labels (“SW,” “WAN,” “MESH,” “HAN,” and “GAS”), filtering out unrelated text.
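
In code, this step can be approximated with Amazon Textract’s text detection API (DetectDocumentText) through boto3, keeping only the word blocks that match the five expected labels. The label set comes from the post; the matching logic is a simplified sketch rather than E.ON’s production code.

import boto3

LED_LABELS = {"SW", "WAN", "MESH", "HAN", "GAS"}

def find_led_labels(frame_jpeg_bytes: bytes) -> dict[str, dict]:
    """Run Amazon Textract on a key frame and return bounding boxes for the LED labels."""
    textract = boto3.client("textract")
    response = textract.detect_document_text(Document={"Bytes": frame_jpeg_bytes})

    labels = {}
    for block in response["Blocks"]:
        if block["BlockType"] != "WORD":
            continue
        text = block["Text"].strip().upper()
        if text in LED_LABELS:
            # BoundingBox coordinates are relative to the frame size (0.0-1.0).
            labels[text] = block["Geometry"]["BoundingBox"]
    return labels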

The following image shows a key frame processed by Amazon Textract. The bounding boxes show detected text; LED labels appear in red after text matching.

A key frame processed by Amazon Textract. The bounding boxes show detected text; LED labels appear in red after text matching.

Because each Communications Hub follows standard dimensions, the LED for each label is consistently located just above it. Using the bounding box coordinates from Amazon Textract as landmarks, the system calculates an “upward” direction for the meter and places a new bounding region above each label, pinpointing the pixels corresponding to each LED. The resulting key frame highlights exactly where to look for LED activity.
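
A minimal sketch of that geometric step might look like the following. The offsets and scaling applied to each label’s bounding box are assumed values for illustration, not E.ON’s calibrated constants, and the meter is assumed to be roughly upright in the frame rather than using a computed “upward” direction.

def led_region_above(label_box: dict, vertical_gap: float = 0.5, scale: float = 1.2) -> dict:
    """Place a search region just above a Textract label bounding box.

    `label_box` uses Textract's relative coordinates (Left, Top, Width, Height),
    where Top grows downward, so subtracting from Top moves the region up.
    The LED is assumed to sit directly above its label, so the region is shifted
    upward and slightly enlarged to tolerate small localization errors.
    """
    height = label_box["Height"] * scale
    width = label_box["Width"] * scale
    return {
        "Left": label_box["Left"] - (width - label_box["Width"]) / 2,
        "Top": label_box["Top"] - height * (1 + vertical_gap),
        "Width": width,
        "Height": height,
    }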

To illustrate this, the following image of a key frame shows how the system maps each detected label (“SW,” “WAN,” “MESH,” “HAN,” “GAS”) to its corresponding LED region. Each region is automatically defined using the Amazon Textract output and geometric rules, allowing the system to isolate just the areas that matter for diagnosis.

A key frame showing the exact LED regions for “SW,” “WAN,” “MESH,” “HAN,” and “GAS.”

With the LED regions now precisely defined, the tool evaluates whether each one is on or off. Because E.ON didn’t have a labeled dataset large enough to train a supervised ML model, we opted for a heuristic approach that combines the Signal Intensity metric from Step 1 with a brightness threshold to determine LED status. By using relative rather than absolute thresholds, the method remains robust across different lighting conditions and angles, even if an LED’s glow reflects off neighboring surfaces. The end result is a simple on/off status for each LED in every key frame, feeding into the final error classification in Step 3.
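
The following is a hedged sketch of that on/off decision for a single LED region, assuming the region has already been cropped from the key frame. The relative-threshold idea mirrors the description above, but the specific statistics and multiplier are illustrative.

import numpy as np

def led_is_on(region_pixels: np.ndarray, frame_pixels: np.ndarray, ratio: float = 1.8) -> bool:
    """Decide whether an LED region is lit, using a relative brightness threshold.

    Comparing the brightest pixels in the LED crop against the whole frame's
    median brightness keeps the decision stable across lighting conditions,
    camera exposure, and reflections on neighboring surfaces.
    """
    region_gray = region_pixels.mean(axis=2)       # rough luminance of the LED crop
    frame_gray = frame_pixels.mean(axis=2)         # rough luminance of the full frame
    region_peak = np.percentile(region_gray, 95)   # brightest part of the LED region
    frame_baseline = np.median(frame_gray)         # ambient brightness of the scene
    return bool(region_peak > frame_baseline * ratio)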

Step 3: Aggregate results to determine the error

Now that each key frame has an on/off status for each LED, the final step is to determine how many times each light pulses during the 7-second clip. This pulse count reveals which frequency (Off, Low, Medium, or High) each LED is blinking at, allowing the solution to identify the appropriate error code from the Communications Hub’s reference manual, just like a field engineer would, but in a fully automated way.

To calculate the number of pulses, the system first groups consecutive “on” frames. Because one pulse of light typically lasts 0.1 seconds, or about 2–3 frames, a continuous block of “on” frames represents a single pulse. After grouping these blocks, the total number of pulses for each LED can be counted. Thanks to the 7-second recording window, the mapping from pulse count to frequency is unambiguous.
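
As a sketch, that grouping can be expressed as follows, assuming a sorted list of the frame indices where a given LED was judged on. The gap tolerance is an illustrative parameter, not a published value.

def count_pulses(on_frame_indices: list[int], max_gap: int = 1) -> int:
    """Count LED pulses from the frame indices where the LED was judged 'on'.

    Consecutive (or nearly consecutive) 'on' frames belong to the same pulse,
    since a single 0.1-second pulse spans roughly 2-3 frames at 30 fps.
    `max_gap` tolerates a single missed frame inside one pulse.
    """
    if not on_frame_indices:
        return 0
    pulses = 1
    for previous, current in zip(on_frame_indices, on_frame_indices[1:]):
        if current - previous > max_gap + 1:
            pulses += 1  # a larger gap means a new pulse has started
    return pulses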

After each LED’s frequency is determined, the system simply references the meter’s manual to find the corresponding error. This final diagnostic result is then relayed back to the customer.
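
Conceptually, that lookup is a dictionary keyed on the combination of per-LED frequencies. The entries below are hypothetical placeholders; the real mapping is built from the Communications Hub’s reference manual.

# Hypothetical excerpt of the lookup built from the reference manual.
# Keys are (SW, WAN, MESH, HAN, GAS) frequencies; values are diagnoses.
ERROR_LOOKUP = {
    ("Medium", "Off", "Medium", "Medium", "Medium"):
        "WAN fault: the Communications Hub cannot reach the Wide Area Network.",
    ("Medium", "Medium", "Medium", "Medium", "Medium"):
        "Normal operation: no error detected.",
}

def diagnose(frequencies: dict[str, str]) -> str:
    """Map the observed LED frequencies to a human-readable diagnosis."""
    key = tuple(frequencies[label] for label in ("SW", "WAN", "MESH", "HAN", "GAS"))
    return ERROR_LOOKUP.get(key, "Unknown pattern: refer the case to an engineer.")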

The following demo video shows this process in action, with a user uploading a 7-second clip of their meter. In just 5.77 seconds, the application detects a WAN error, explains how it arrived at that conclusion, and outlines the steps an engineer would take to address the issue.

Conclusion

E.ON’s story highlights how a creative application of Amazon Textract, combined with custom image analysis and pulse counting, can solve a real-world challenge at scale. By diagnosing smart meter errors through brief smartphone videos, E.ON aims to lower costs, improve customer satisfaction, and enhance overall energy service reliability.

Although the system is still being field tested, initial results are encouraging: approximately 350 cases per week (18,200 annually) can now be diagnosed remotely, with an estimated £10 million in projected annual savings. Real-world accuracy stands at 84%, without extensive tuning, while controlled environments have shown a 100% success rate. Notably, the tool has even caught errors that field engineers initially missed, pointing to opportunities for refined training and proactive fault detection.

Looking ahead, E.ON plans to expand this approach to other devices and integrate advanced computer vision techniques to further boost accuracy. If you’re interested in exploring a similar solution, consider the following next steps:

  • Explore the Amazon Textract documentation to learn how you can streamline text extraction for your own use cases
  • Consider Amazon Bedrock Data Automation as a generative AI-powered alternative for extracting insights from multimodal content in audio, documents, images, and video
  • Browse the AWS Machine Learning Blog to discover innovative ways customers use AWS ML services to drive efficiency and reduce costs
  • Contact your AWS Account Manager to discuss your specific needs and design a proof of concept or production-ready solution

By combining domain expertise with AWS services, E.ON demonstrates how an AI-driven strategy can transform operational efficiency, even in early stages. If you’re considering a similar path, these resources can help you unlock the power of AWS AI and ML to meet your unique business goals.


About the Authors

Sam Charlton is a Product Manager at E.ON who looks for innovative ways to apply existing technology to entrenched issues that are often ignored. Starting in the contact center, he has worked across the breadth and depth of E.ON, ensuring a holistic view of his business’s needs.

Tanrajbir Takher is a Data Scientist at the AWS Generative AI Innovation Center, where he works with enterprise customers to implement high-impact generative AI solutions. Prior to AWS, he led research for new products at a computer vision unicorn and founded an early generative AI startup.

Satyam Saxena is an Applied Science Manager at the AWS Generative AI Innovation Center. He leads generative AI customer engagements, driving innovative ML/AI initiatives from ideation to production, with over a decade of experience in machine learning and data science. His research interests include deep learning, computer vision, NLP, recommender systems, and generative AI.

Tom Chester is an AI Strategist at the AWS Generative AI Innovation Center, working directly with AWS customers to understand the business problems they are trying to solve with generative AI and helping them scope and prioritize use cases. Tom has over a decade of experience in data and AI strategy and data science consulting.

Amit Dhingra is a GenAI/ML Sr. Sales Specialist in the UK. He works as a trusted advisor to customers by providing guidance on how they can unlock new value streams, solve key business problems, and deliver results for their customers using AWS generative AI and ML services.