AWS Machine Learning Blog

Tag: HAQM SageMaker

Predict March Madness using HAQM SageMaker

It’s mid-March and in the United States that can mean only one thing – it’s time for March Madness! Every year countless people fill out a bracket trying to pick which college basketball team will take it all. Do you have a favorite team to win in 2018? In this blog post, we’ll show you […]

Use HAQM CloudWatch custom metrics for real-time monitoring of HAQM SageMaker model performance

Training deep learning (DL) models can be expensive and time-consuming. It’s important for data scientists to monitor model metrics, such as training accuracy, training loss, validation accuracy, and validation loss, and to make informed decisions based on those metrics. In this blog post, I’ll show you how to […]

Deploy Gluon models to AWS DeepLens using a simple Python API

April 2023 Update: Starting January 31, 2024, you will no longer be able to access AWS DeepLens through the AWS management console, manage DeepLens devices, or access any projects you have created. To learn more, refer to these frequently asked questions about AWS DeepLens end of life. Today we are excited to announce that you can […]

Train and host Scikit-Learn models in HAQM SageMaker by building a Scikit Docker container

Introduced at re:Invent 2017, HAQM SageMaker provides a serverless data science environment to build, train, and deploy machine learning models at scale. Customers can also work with the frameworks they find most familiar, such as scikit-learn. In this blog post, we’ll accomplish two goals: First, we’ll give you a high-level overview of […]

HAQM SageMaker support for TensorFlow 1.5, MXNet 1.0, and CUDA 9

HAQM SageMaker pre-built deep learning framework containers now support TensorFlow 1.5 and Apache MXNet 1.0, both of which take advantage of CUDA 9 optimizations for faster performance on SageMaker ml.p3 instances. In addition to performance benefits, this provides access to updated features such as Eager execution in TensorFlow and advanced indexing for NDArrays in MXNet. More […]

Build an online compound solubility prediction workflow with AWS Batch and HAQM SageMaker

Machine learning (ML) methods for the field of computational chemistry are growing at an accelerated rate. Easy access to open-source frameworks (such as TensorFlow and Apache MXNet), toolkits (such as the RDKit cheminformatics software), and open-science initiatives (such as DeepChem) makes it easy to apply these tools in daily research. In the field of chemical informatics, many […]

Build your own object classification model in SageMaker and import it to DeepLens

April 2023 Update: Starting January 31, 2024, you will no longer be able to access AWS DeepLens through the AWS management console, manage DeepLens devices, or access any projects you have created. To learn more, refer to these frequently asked questions about AWS DeepLens end of life. We are excited to launch a new feature for […]

HAQM SageMaker BlazingText: Parallelizing Word2Vec on Multiple CPUs or GPUs

Today we’re launching HAQM SageMaker BlazingText as the latest built-in algorithm for HAQM SageMaker. BlazingText is an unsupervised learning algorithm for generating Word2Vec embeddings. These are dense vector representations of words in large corpora. We’re excited to make BlazingText, the fastest implementation of Word2Vec, available to HAQM SageMaker users on: Single CPU instances (like the […]

AWS KMS-based Encryption Is Now Available for Training and Hosting in HAQM SageMaker

HAQM SageMaker uses throwaway keys, also called transient keys, to encrypt the ML General Purpose storage volumes attached to training and hosting EC2 instances. Because these keys are used only to encrypt the ML storage volumes and are then immediately discarded, the volumes can safely be used to store confidential data. Volumes can be accessed […]

Making neural nets uncool again – AWS style

Just as the goal of HAQM AI is to democratize machine learning with the development of platforms such as HAQM SageMaker, the goal of fast.ai is to level the educational playing field so that anyone can pick up machine learning and be productive. The fast.ai tagline is “Making neural nets uncool again.” This is not a play to decrease the popularity of deep neural networks, but instead to broaden their appeal and accessibility beyond the academic elites who have dominated the research in this area.