AWS Machine Learning Blog
Category: Amazon SageMaker Neo
Demystifying machine learning at the edge through real use cases
October 2023: Starting April 26, 2024, you will no longer be able to access Amazon SageMaker Edge Manager. For more information about continuing to deploy your models to edge devices, see SageMaker Edge Manager end of life. Edge is a term that refers to a location, far from the cloud or a big data center, where you […]
ML inferencing at the edge with Amazon SageMaker Edge and Ambarella CV25
Ambarella builds computer vision SoCs (systems on chip) based on CVflow, a highly efficient AI chip architecture that provides the Deep Neural Network (DNN) processing required for edge inferencing use cases such as intelligent home monitoring and smart surveillance cameras. Developers convert models trained with frameworks (such as TensorFlow or MXNet) to the Ambarella CVflow format […]
Unlock near-3x performance gains with XGBoost and Amazon SageMaker Neo
October 2021: This post has been updated with a new sample notebook for Amazon SageMaker Studio users. When a model is deployed to a production environment, inference speed matters. Models with fast inference speeds require fewer resources to run, which translates to cost savings, and applications that consume the models' predictions benefit from the improved […]
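The compilation step this post describes can be sketched with the `CreateCompilationJob` API. The sketch below only assembles the request; the job name, IAM role, S3 paths, and the 30-feature input shape are illustrative placeholders, and the actual API call is gated behind an environment variable so nothing runs without real AWS credentials.

```python
import os

def build_xgboost_compilation_request(job_name, role_arn, model_s3_uri, output_s3_uri):
    """Assemble a SageMaker Neo compilation job request for an XGBoost model."""
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": model_s3_uri,                   # tar.gz with the trained XGBoost model
            "DataInputConfig": '{"data": [1, 30]}',  # one row of 30 features (example shape)
            "Framework": "XGBOOST",
        },
        "OutputConfig": {
            "S3OutputLocation": output_s3_uri,
            "TargetDevice": "ml_c5",                 # compile for c5 CPU instances
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }

# Placeholder account, role, and bucket names -- substitute your own.
request = build_xgboost_compilation_request(
    "xgb-neo-demo",
    "arn:aws:iam::111122223333:role/SageMakerRole",
    "s3://my-bucket/model/model.tar.gz",
    "s3://my-bucket/neo-output/",
)

if os.environ.get("RUN_NEO_COMPILE"):  # only call AWS when explicitly enabled
    import boto3
    boto3.client("sagemaker").create_compilation_job(**request)
```

The compiled artifact lands in the S3 output location and can then be deployed to a SageMaker endpoint in place of the uncompiled model.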
Build reusable, serverless inference functions for your Amazon SageMaker models using AWS Lambda layers and containers
July 2023: This post was reviewed for accuracy. Please refer to Deploying ML models using SageMaker Serverless Inference, a new inference option that enables you to easily deploy machine learning models for inference without having to configure or manage the underlying infrastructure. In AWS, you can host a trained model multiple ways, such as via […]
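The Lambda-hosted pattern the post covers can be sketched as a handler that lazily loads a model shipped in a layer or container image and caches it across warm invocations. This is a minimal sketch, not the post's exact code: the `/opt/ml/model.joblib` path, the `MODEL_PATH` variable, and the use of joblib are assumptions for illustration.

```python
import json
import os

MODEL_PATH = os.environ.get("MODEL_PATH", "/opt/ml/model.joblib")
_model = None  # cached across warm invocations of the same container

def _load_model():
    """Lazy-load the model once per container; warm starts reuse it."""
    global _model
    if _model is None:
        import joblib  # assumed to be provided by the layer or container image
        _model = joblib.load(MODEL_PATH)
    return _model

def handler(event, context):
    """Parse features from the request body and return predictions as JSON."""
    body = json.loads(event.get("body") or "{}")
    features = body.get("features", [])
    if not features:
        return {"statusCode": 400, "body": json.dumps({"error": "no features"})}
    predictions = _load_model().predict(features)
    return {"statusCode": 200, "body": json.dumps({"predictions": list(predictions)})}
```

Deferring the model load to the first request keeps cold-start initialization fast, while the module-level cache avoids reloading the artifact on every invocation.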
Reduce ML inference costs on Amazon SageMaker with hardware and software acceleration
Amazon SageMaker is a fully managed service that enables data scientists and developers to build, train, and deploy machine learning (ML) models at 50% lower TCO than self-managed deployments on Amazon Elastic Compute Cloud (Amazon EC2). Elastic Inference is a capability of SageMaker that delivers 20% better performance for model inference than AWS Deep Learning Containers on […]
Monitor and manage anomaly detection models on a fleet of wind turbines with Amazon SageMaker Edge Manager
September 8, 2021: Amazon Elasticsearch Service has been renamed to Amazon OpenSearch Service. See details. In industrial IoT, running machine learning (ML) models on edge devices is necessary for many use cases, such as predictive maintenance, quality improvement, real-time monitoring, process optimization, and security. The energy industry, for instance, invests heavily in ML to automate […]
New Amazon SageMaker Neo features to run more models faster and more efficiently on more hardware platforms
Amazon SageMaker Neo enables developers to train machine learning (ML) models once and optimize them to run on Amazon SageMaker endpoints in the cloud and on supported devices at the edge. Since Neo was first announced at re:Invent 2018, we have been continuously working with the Neo-AI open-source communities and several hardware partners to increase […]
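The "train once, compile for many targets" idea can be illustrated by how a Neo compilation job's `OutputConfig` varies per target: a named device for cloud instances or edge hardware, or an OS/architecture pair for a generic platform build. This is a hedged sketch; the helper function and the bucket path are not from the post, and only the target names shown are taken from Neo's documented options.

```python
def neo_output_config(s3_output, target):
    """Build the OutputConfig for a Neo compilation job.

    `target` is either a named device string (e.g. "ml_c5", "jetson_nano")
    or an (os, arch) tuple for a generic TargetPlatform build.
    """
    config = {"S3OutputLocation": s3_output}
    if isinstance(target, tuple):
        os_name, arch = target
        config["TargetPlatform"] = {"Os": os_name, "Arch": arch}
    else:
        config["TargetDevice"] = target
    return config

# Same trained model, three different compilation targets (placeholder bucket):
cloud = neo_output_config("s3://my-bucket/neo/", "ml_c5")        # cloud endpoint
edge = neo_output_config("s3://my-bucket/neo/", "jetson_nano")   # edge device
generic = neo_output_config("s3://my-bucket/neo/", ("LINUX", "ARM64"))
```

Each config would be passed as the `OutputConfig` of a separate `CreateCompilationJob` call against the same input model artifact.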
Model dynamism support in Amazon SageMaker Neo
Amazon SageMaker Neo was launched at AWS re:Invent 2018. It delivered notable performance improvements on models with statically known input and output data shapes, typically image classification models. These models are usually composed of a stack of blocks that contain compute-intensive operators, such as convolution and matrix multiplication. Neo applies a series of optimizations to […]
Amazon SageMaker Neo makes it easier to get faster inference for more ML models with NVIDIA TensorRT
Amazon SageMaker Neo now uses the NVIDIA TensorRT acceleration library to speed up machine learning (ML) models on NVIDIA Jetson devices at the edge and on AWS G4dn and P3 instances in the AWS Cloud. Neo compiles models from TensorFlow, TFLite, MXNet, PyTorch, ONNX, and DarkNet to make optimal use of NVIDIA GPUs, providing […]
Optimizing ML models for iOS and macOS devices with Amazon SageMaker Neo and Core ML
Core ML is a machine learning (ML) model format created and supported by Apple for models that are compiled, deployed, and run on Apple devices. Developers who train their models in popular frameworks such as TensorFlow and PyTorch convert those models to the Core ML format to deploy them on Apple devices. AWS has automated the model conversion to Core […]
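The automated conversion the post describes takes the form of a Neo compilation job whose target is Core ML. The sketch below assembles such a request for a PyTorch image classifier; the job name, role, S3 paths, the 224x224 input shape, and the labels file name are all placeholders, and attaching class labels via `CompilerOptions` is an optional step.

```python
import json

def coreml_compilation_request(job_name, role_arn, model_s3, output_s3):
    """Assemble a Neo compilation job request targeting Core ML."""
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": model_s3,                                  # traced TorchScript model.tar.gz
            "DataInputConfig": '{"input0": [1, 3, 224, 224]}',  # NCHW image input
            "Framework": "PYTORCH",
        },
        "OutputConfig": {
            "S3OutputLocation": output_s3,
            "TargetDevice": "coreml",                           # emit a Core ML model
            # Optional: bundle class labels so the converted model is self-describing
            "CompilerOptions": json.dumps({"class_labels": "imagenet_labels_1000.txt"}),
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }

# Placeholder account, role, and bucket names -- substitute your own.
request = coreml_compilation_request(
    "torch-to-coreml-demo",
    "arn:aws:iam::111122223333:role/SageMakerRole",
    "s3://my-bucket/model/model.tar.gz",
    "s3://my-bucket/coreml-output/",
)
```

The resulting artifact in the output bucket is a Core ML model that can be dropped into an Xcode project and run on-device.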