
2020

HAQM Reduces Infrastructure Costs on Visual Bin Inspection by a Projected 40% Using HAQM SageMaker

HAQM Fulfillment Technologies migrated from a legacy custom solution for identifying misplaced inventory to HAQM SageMaker, reducing AWS infrastructure costs by a projected 40 percent per month and simplifying its architecture.

12 weeks

to develop new solution

2 weeks

to launch new models, compared to 3–6 months

Millions

of images processed daily

40%

projected savings per month on AWS spend by removing unnecessary infrastructure services

50%

cut in time-to-predict latency by using GPUs instead of CPUs

Overview

HAQM Fulfillment Technologies (AFT) must monitor millions of global shipments annually to deliver on HAQM’s promise that an item will be readily available to a customer and will arrive on time. To operate at HAQM’s scale, AFT’s internal visual bin inspection (VBI) team ran a proprietary legacy computer vision–based software solution that scanned millions of images across its network of fulfillment centers (FCs) to identify misplaced inventory. Deploying, testing, training, and maintaining that solution were expensive and time consuming, and building new capabilities to overcome its limitations would have taken several months of development effort.

The VBI team turned to an HAQM Web Services (AWS) solution, migrating to HAQM SageMaker, a fully managed service that gives developers and data scientists the ability to build, train, and deploy machine learning (ML) models quickly. The migration simplified the VBI team’s architecture, reduced its infrastructure costs, made ML model deployments faster and easier, and relieved the development team of system maintenance. The VBI team can now quickly scale with each new FC to perform inspections that support HAQM’s customer promise.


Opportunity | Solving Past System Deficiencies Using HAQM SageMaker

The VBI team builds autonomous counting systems that use computer vision, ML, and image-processing algorithms to analyze images across its network of FCs worldwide. Initially, the VBI team ran its own solution on HAQM Elastic Compute Cloud (HAQM EC2) C4 Instances. This solution did not support piloting new models: a new model could not handle requests alongside the old ML model, and the team could not test it on real data without risking service disruptions. Consequently, the VBI team had to develop ML models offline and validate and test them manually, which often took 3–6 months. The system also could not divert a share of traffic to newly launched models. And inference—the process of making predictions using a trained model—was limited to CPU-based single-input predictions, so the VBI team couldn’t reduce costs with graphics processing units (GPUs) or batch execution.
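HAQM SageMaker addresses the piloting gap with endpoint production variants, which split live traffic between an old and a new model in proportion to configured weights. The sketch below builds such an endpoint configuration as a plain request dictionary; the endpoint, model, and variant names, instance types, and the 90/10 split are illustrative, not AFT's actual setup.

```python
# Sketch of a SageMaker pilot ("canary") rollout via production variants.
# All names and numbers are illustrative; a real deployment would pass the
# returned dict to boto3.client("sagemaker").create_endpoint_config(**cfg).

def pilot_endpoint_config(prod_model, pilot_model, pilot_fraction=0.1):
    """Build a CreateEndpointConfig request that routes a small share of
    live traffic to a new model while the proven model keeps the rest.
    Traffic splits in proportion to the variant weights."""
    return {
        "EndpointConfigName": "vbi-inspection-pilot",
        "ProductionVariants": [
            {
                "VariantName": "production",
                "ModelName": prod_model,
                "InitialInstanceCount": 2,
                "InstanceType": "ml.g4dn.xlarge",
                "InitialVariantWeight": 1.0 - pilot_fraction,
            },
            {
                "VariantName": "pilot",
                "ModelName": pilot_model,
                "InitialInstanceCount": 1,
                "InstanceType": "ml.g4dn.xlarge",
                "InitialVariantWeight": pilot_fraction,
            },
        ],
    }

config = pilot_endpoint_config("vbi-model-v1", "vbi-model-v2")
weights = {v["VariantName"]: v["InitialVariantWeight"]
           for v in config["ProductionVariants"]}
print(weights)
```

Because both variants sit behind one endpoint, the pilot model sees real traffic at low volume, and promoting it is a weight change rather than a redeployment.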

The VBI team was drawn to HAQM SageMaker because it solved the inefficiencies of the team's legacy software solution while reducing the amount of software and infrastructure that needed to be maintained. The fully managed service enabled the software development team to focus on core competencies instead of system management.


We will cut infrastructure costs by 40 percent with batch inference—just by using the features that come with HAQM SageMaker.”

Lalat Nayak
Senior Engineer, HAQM Fulfillment Technologies

Solution | Cutting Workload Using HAQM SageMaker

The VBI team developed the new solution on HAQM SageMaker over 12 weeks; during that time, the team containerized the inference code and built infrastructure for automating deployment. By comparison, the VBI team needed 8–12 months to develop its legacy solution. Then the VBI team conducted performance testing for another 2 months and completed its migration in April 2020 with no disruption to normal operations.
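Containerizing the inference code means implementing SageMaker's documented container contract: the container runs an HTTP server on port 8080 that answers GET /ping for health checks and POST /invocations for predictions. A minimal sketch of that contract follows; the echo "model" is a placeholder, not AFT's inference code.

```python
# Minimal sketch of the HTTP contract a SageMaker inference container
# implements: GET /ping (health check) and POST /invocations (prediction).
# The echo response stands in for real model inference.
from http.server import BaseHTTPRequestHandler, HTTPServer

class InferenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # SageMaker probes /ping to decide whether the container is healthy.
        if self.path == "/ping":
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

    def do_POST(self):
        # Prediction requests arrive as the body of POST /invocations.
        if self.path == "/invocations":
            length = int(self.headers.get("Content-Length", 0))
            payload = self.rfile.read(length)
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            # A real container would run the model here; we echo the input.
            self.wfile.write(payload)
        else:
            self.send_response(404)
            self.end_headers()

def serve(port=8080):
    """Inside the container, SageMaker routes traffic to port 8080."""
    HTTPServer(("0.0.0.0", port), InferenceHandler).serve_forever()
```

Because the contract is just HTTP, the same deployment automation works regardless of which ML framework runs behind /invocations.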

As a result of the migration, Lalat Nayak, senior engineer for the VBI team at AFT, projects, “We will cut infrastructure costs by 40 percent with batch inference—just by using the features that come with HAQM SageMaker.” The team also gained a simplified system, trading in 1,000 CPU instances for one endpoint that automatically scales a fleet of fewer than 100 GPU instances. That transition to GPUs has cut the time-to-predict latency by 50 percent. The single HAQM SageMaker endpoint, which handles traffic from sensor towers across North America, Europe, and Japan, processes millions of bin images per day.
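The automatic scaling behind "one endpoint that scales a fleet of fewer than 100 GPU instances" is configured through Application Auto Scaling: a scalable target caps the instance count, and a target-tracking policy adds or removes instances to hold per-instance request volume near a target. The request payloads below sketch that setup; the endpoint and variant names, capacity bounds, and target value are illustrative.

```python
# Sketch of SageMaker endpoint autoscaling via Application Auto Scaling.
# Names and thresholds are illustrative; these dicts would be passed to
# boto3.client("application-autoscaling").register_scalable_target(...)
# and .put_scaling_policy(...) respectively.

ENDPOINT, VARIANT = "vbi-inspection", "AllTraffic"
resource_id = f"endpoint/{ENDPOINT}/variant/{VARIANT}"

scalable_target = {
    "ServiceNamespace": "sagemaker",
    "ResourceId": resource_id,
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "MinCapacity": 10,
    "MaxCapacity": 100,  # keeps the GPU fleet under 100 instances
}

scaling_policy = {
    "PolicyName": "vbi-invocations-tracking",
    "ServiceNamespace": "sagemaker",
    "ResourceId": resource_id,
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # Add/remove instances to hold per-instance request volume near
        # this target as sensor-tower traffic rises and falls.
        "TargetValue": 1000.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
}

print(scalable_target["MaxCapacity"], scaling_policy["PolicyType"])
```

With this in place, demand spikes trigger scale-out automatically instead of requiring a hardware order.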

When AFT launches FCs in new regions, it now expects to deploy the HAQM SageMaker solution in 1–2 weeks, versus 1–2 months with its legacy solution. “Relying on HAQM SageMaker to host the model gives us the ability to decide whether we use the same model for all the warehouses or just some,” explains Nayak. “It helps us run experiments freely because we can release a feature for just a handful of warehouses and then expand to the rest after successful testing.”

HAQM SageMaker saves the software development engineering team about 1 month per year on infrastructure and software maintenance. It also enabled the VBI team to remove HAQM Simple Queue Service (HAQM SQS), a fully managed message-queuing service, and HAQM Simple Storage Service (HAQM S3), an object storage service, from its ML pipeline, which will save AFT 40 percent per month on AWS costs. HAQM SageMaker has scaled to handle demand spikes automatically, whereas the software development engineering team previously would have had to order more hardware and delay implementation until it was delivered.
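The projected 40 percent savings come from batch inference: grouping many images into one request so GPU inference amortizes per-request overhead, rather than making one CPU prediction per image. The sketch below shows client-side micro-batching in that spirit; the batch size and payload names are assumptions, not AFT's design.

```python
# Illustrative micro-batcher: group single-image inference inputs into
# batches so one endpoint invocation returns many predictions at once.
# Batch size and payload format are assumptions, not AFT's design.

def make_batches(items, batch_size):
    """Split a stream of inference inputs into fixed-size batches;
    the final batch may be smaller than batch_size."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# Each batch would be serialized and sent in a single call such as
# boto3.client("sagemaker-runtime").invoke_endpoint(...), yielding one
# GPU round trip per batch instead of one per image.
images = [f"bin-image-{i}.jpg" for i in range(10)]
batches = list(make_batches(images, 4))
print([len(b) for b in batches])  # [4, 4, 2]
```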

HAQM SageMaker also reduced the load on the research wing of the VBI team, which can now launch models in pilot mode and roll out a new model in about 2 weeks, with no overhead needed to handle volume increases. And because HAQM SageMaker is ML-framework agnostic, the VBI team can layer whichever framework it wants on top of it. Converting from Caffe to Apache MXNet a few years ago took the VBI team 6–8 months, but Nayak anticipates a future conversion will be faster and simpler on HAQM SageMaker: “We want to be able to seamlessly transition to whatever technology is working best without investing a lot of time in integrating that technology into our framework. HAQM SageMaker does that for us.”

Outcome | Delivering on the HAQM Customer Promise

Using HAQM SageMaker, AFT’s VBI team reduced costs and the overhead on software developers while maximizing the efficiency of its solution. The VBI team expects that by the end of 2020, the HAQM SageMaker model will be used in all of its FCs, processing millions of images in each center. With the ability to efficiently monitor its FCs and quickly find misplaced inventory, AFT can better fulfill HAQM’s promise that customers will receive their packages on time.

About HAQM

HAQM is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. It has pioneered customer reviews, 1-Click ordering, personalized recommendations, Prime, Fulfillment by HAQM, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, HAQM Echo, and HAQM Alexa.

AWS Services Used

HAQM SageMaker

HAQM SageMaker is built on HAQM’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.

Learn more »

HAQM Simple Storage Service (HAQM S3)

HAQM Simple Storage Service (HAQM S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Learn more »

HAQM Simple Queue Service (SQS)

HAQM Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. 

Learn more »

Explore HAQM's journey of innovation using AWS

More HAQM Stories


Get Started

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.