AWS Machine Learning Blog
Tag: Amazon SageMaker
AWS internal use case: Evaluating and adopting Amazon SageMaker within AWS Marketing
We’re the AWS Marketing Data Science team. We use advanced analytical and machine learning (ML) techniques to deliver insights into business problems across the AWS customer lifecycle, such as ML-driven scoring of sales leads, ML-based targeting segments, and econometric models for downstream impact measurement. Within Amazon, each team operates independently and owns the […]
Amazon SageMaker console now supports training job cloning
Today we are launching the training job cloning feature on the Amazon SageMaker console, which makes it much easier for you to create training jobs based on existing ones. When you use Amazon SageMaker, it’s common to run multiple training jobs with different training datasets but an identical configuration. It’s also common to adjust a specific […]
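Outside the console, the same "clone" pattern can be sketched with the SageMaker API: describe an existing job, then reuse its configuration in a new CreateTrainingJob request. The helper below is a minimal sketch (the function name, fixture values, and the `epochs` hyperparameter are hypothetical); in practice `described_job` would come from `boto3.client("sagemaker").describe_training_job(...)` and the returned dict would be passed to `create_training_job(**request)`.

```python
import copy

def clone_training_job_request(described_job, new_job_name, hyperparameter_overrides=None):
    """Build a CreateTrainingJob request from a DescribeTrainingJob response,
    optionally overriding selected hyperparameters (the common cloning case)."""
    request = {
        "TrainingJobName": new_job_name,
        "AlgorithmSpecification": copy.deepcopy(described_job["AlgorithmSpecification"]),
        "RoleArn": described_job["RoleArn"],
        "InputDataConfig": copy.deepcopy(described_job["InputDataConfig"]),
        "OutputDataConfig": copy.deepcopy(described_job["OutputDataConfig"]),
        "ResourceConfig": copy.deepcopy(described_job["ResourceConfig"]),
        "StoppingCondition": copy.deepcopy(described_job["StoppingCondition"]),
        "HyperParameters": dict(described_job.get("HyperParameters", {})),
    }
    if hyperparameter_overrides:
        request["HyperParameters"].update(hyperparameter_overrides)
    return request

# Hypothetical described job, standing in for a describe_training_job response:
described = {
    "TrainingJobName": "base-job",
    "AlgorithmSpecification": {"TrainingImage": "image-uri", "TrainingInputMode": "File"},
    "RoleArn": "arn:aws:iam::123456789012:role/demo",
    "InputDataConfig": [],
    "OutputDataConfig": {"S3OutputPath": "s3://bucket/out"},
    "ResourceConfig": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1, "VolumeSizeInGB": 30},
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    "HyperParameters": {"epochs": "10"},
}
cloned = clone_training_job_request(described, "cloned-job", {"epochs": "20"})
```

Deep-copying keeps the clone independent of the original description, so editing the new request never mutates the source job's config.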
Load test and optimize an Amazon SageMaker endpoint using automatic scaling
Once you have trained, optimized, and deployed your machine learning (ML) model, the next challenge is to host it so that consumers can easily invoke it and get predictions from it. Many customers have consumers, either external or internal to their organizations, who want to use the model for predictions (ML […]
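Automatic scaling for a SageMaker endpoint goes through Application Auto Scaling: you register the endpoint variant's instance count as a scalable target, then attach a target-tracking policy on the `SageMakerVariantInvocationsPerInstance` metric. The sketch below builds the two request bodies as plain dicts (endpoint/variant names and the 200-invocations target are hypothetical); they would be passed to `boto3.client("application-autoscaling").register_scalable_target(**target)` and `put_scaling_policy(**policy)`.

```python
def autoscaling_requests(endpoint_name, variant_name, min_capacity=1,
                         max_capacity=4, invocations_per_instance=200.0):
    """Build the register_scalable_target and put_scaling_policy request
    bodies for a SageMaker endpoint production variant."""
    resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"
    target = {
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "MinCapacity": min_capacity,
        "MaxCapacity": max_capacity,
    }
    policy = {
        "PolicyName": f"{endpoint_name}-target-tracking",
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            # Scale out when average invocations per instance exceed this value.
            "TargetValue": invocations_per_instance,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
        },
    }
    return target, policy

target, policy = autoscaling_requests("my-endpoint", "AllTraffic")
```

Load testing then consists of driving traffic at the endpoint and checking that instance count tracks the invocations-per-instance target.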
Using R with Amazon SageMaker
July 2022: This post was reviewed and updated for relevancy and accuracy, with an updated AWS CloudFormation template. December 2020: Post updated with changes required for Amazon SageMaker SDK v2. This blog post describes how to train, deploy, and retrieve predictions from a machine learning (ML) model using Amazon SageMaker and R. The model predicts abalone age […]
Using Pipe input mode for Amazon SageMaker algorithms
Today, we are introducing Pipe input mode support for the Amazon SageMaker built-in algorithms. With Pipe input mode, your dataset is streamed directly to your training instances instead of being downloaded first. This means that your training jobs start sooner, finish more quickly, and need less disk space. Amazon SageMaker algorithms have been engineered to be […]
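At the API level, the mode is a single field: `TrainingInputMode` inside the `AlgorithmSpecification` block of a CreateTrainingJob request (with the SageMaker Python SDK, this corresponds to the `input_mode` argument of an `Estimator`). A minimal sketch, with a hypothetical placeholder image URI:

```python
def algorithm_specification(training_image, input_mode="Pipe"):
    """AlgorithmSpecification block of a CreateTrainingJob request.
    "Pipe" streams the S3 dataset straight into the training container;
    "File" downloads the full dataset to local disk before training starts."""
    if input_mode not in ("Pipe", "File"):
        raise ValueError("input_mode must be 'Pipe' or 'File'")
    return {"TrainingImage": training_image, "TrainingInputMode": input_mode}

# Placeholder ECR image URI; real built-in algorithm images are region-specific.
spec = algorithm_specification("123456789012.dkr.ecr.us-east-1.amazonaws.com/pca:1")
```

Because nothing is copied to disk up front, Pipe mode matters most for datasets larger than the training instance's storage.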
Perform a large-scale principal component analysis faster using Amazon SageMaker
In this blog post, we conduct a performance comparison for PCA using Amazon SageMaker, Spark ML, and scikit-learn on high-dimensional datasets. SageMaker consistently showed faster computational performance. Refer to Figures 1 and 2 at the bottom to see the speed improvements. Principal Component Analysis Principal Component Analysis (PCA) is an unsupervised learning algorithm that attempts to […]
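As a reference point for what all three implementations compute, PCA reduces to a singular value decomposition of the mean-centered data matrix: the right singular vectors are the principal axes, and the squared singular values give the explained variances. A minimal NumPy sketch (function and variable names are illustrative, not from the post):

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centered data matrix (rows = samples).
    Returns (components, projected data, explained variance per component)."""
    Xc = X - X.mean(axis=0)
    # Economy SVD; rows of Vt are the principal axes, sorted by variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    explained_variance = (S[:n_components] ** 2) / (len(X) - 1)
    return components, Xc @ components.T, explained_variance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
components, projected, variance = pca(X, 3)
```

For truly high-dimensional data this dense SVD becomes the bottleneck, which is where the distributed and randomized implementations compared in the post pay off.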
Running fast.ai notebooks with Amazon SageMaker
Update 25 JAN 2019: fast.ai has released a new version of their library and MOOC, making the following blog post outdated. For the latest instructions on setting up the library and course on a SageMaker notebook instance, please refer to the instructions outlined here: http://course.fast.ai/start_sagemaker.html fast.ai is an organization dedicated to making the power of deep learning accessible […]
Build a March Madness predictor application supported by Amazon SageMaker
What an opening round of March Madness basketball tournament games! We had a buzzer-beater, some historic upsets, and exciting games throughout. The model built in our first blog post (Part 1) pointed out a few likely upset candidates (Loyola IL, Butler), but did not see some coming (Marshall, UMBC). I’m sure there will be […]
Create a Word-Pronunciation sequence-to-sequence model using Amazon SageMaker
Amazon SageMaker seq2seq offers you a very simple way to make use of the state-of-the-art encoder-decoder architecture (including the attention mechanism) for your sequence-to-sequence tasks. You just need to prepare your sequence data in recordio-protobuf format and your vocabulary mapping files in JSON format. Then you need to upload them to Amazon Simple
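The vocabulary mapping files are just JSON dictionaries from token to integer id, one for the source side (graphemes of the word) and one for the target side (phonemes of the pronunciation). The sketch below builds them from CMUdict-style pairs; the special-token layout (`<pad>`, `<unk>`, `<s>`, `</s>` in the first four ids) is an assumption about the expected format, and the helper and sample pairs are hypothetical.

```python
import json

SPECIAL = ["<pad>", "<unk>", "<s>", "</s>"]  # assumed reserved-token layout

def build_vocab(tokens):
    """Map each distinct token to an integer id, reserving low ids for specials."""
    vocab = {tok: i for i, tok in enumerate(SPECIAL)}
    for tok in sorted(set(tokens)):
        vocab.setdefault(tok, len(vocab))
    return vocab

# Graphemes on the source side, phonemes on the target side:
pairs = [("cat", "K AE T"), ("dog", "D AO G")]
src_vocab = build_vocab(ch for word, _ in pairs for ch in word)
trg_vocab = build_vocab(ph for _, pron in pairs for ph in pron.split())

# Each vocabulary would then be serialized to its own JSON file for upload.
src_json = json.dumps(src_vocab, indent=2)
trg_json = json.dumps(trg_vocab, indent=2)
```

The same token-to-id maps are later needed in reverse to decode the model's predicted phoneme ids back into text.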
Customize your Amazon SageMaker notebook instances with lifecycle configurations and the option to disable internet access
Amazon SageMaker provides fully managed instances running Jupyter notebooks for data exploration and preprocessing. Customers really appreciate how easy it is to launch a pre-configured notebook instance with just one click. Today, we are making notebook instances more customizable by providing two new options: lifecycle configurations, which help automate the process of customizing your notebook instance, […]
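A lifecycle configuration is a shell script run when the instance is created (`OnCreate`) or each time it starts (`OnStart`), submitted base64-encoded via the `create_notebook_instance_lifecycle_config` API. A minimal sketch of building that request body (the script contents and config name are hypothetical examples); the dict would be passed to `boto3.client("sagemaker").create_notebook_instance_lifecycle_config(**request)`.

```python
import base64

# Hypothetical on-start customization: install a package into the python3 env.
ON_START = """#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
source activate python3
pip install --quiet pandas
EOF
"""

def lifecycle_config_request(name, on_start_script):
    """Request body for create_notebook_instance_lifecycle_config.
    Script content must be base64-encoded text."""
    encoded = base64.b64encode(on_start_script.encode("utf-8")).decode("utf-8")
    return {
        "NotebookInstanceLifecycleConfigName": name,
        "OnStart": [{"Content": encoded}],
    }

request = lifecycle_config_request("install-pandas", ON_START)
```

The named config is then attached when creating or updating a notebook instance, so every restart reapplies the customization.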