AWS Machine Learning Blog
Category: HAQM SageMaker
Integrate HyperPod clusters with Active Directory for seamless multi-user login
HAQM SageMaker HyperPod is purpose-built to accelerate foundation model (FM) training, removing the undifferentiated heavy lifting involved in managing and optimizing a large training compute cluster. With SageMaker HyperPod, you can train FMs for weeks or months without disruption. Typically, HyperPod clusters are used by multiple users: machine learning (ML) researchers, software engineers, data scientists, […]
The executive’s guide to generative AI for sustainability
Organizations are facing ever-increasing requirements for sustainability goals alongside environmental, social, and governance (ESG) practices. A Gartner, Inc. survey revealed that 87 percent of business leaders expect to increase their organization’s investment in sustainability in the coming years. This post serves as a starting point for any executive seeking to navigate the intersection of generative […]
Use Kubernetes Operators for new inference capabilities in HAQM SageMaker that reduce LLM deployment costs by 50% on average
We are excited to announce a new version of the HAQM SageMaker Operators for Kubernetes using the AWS Controllers for Kubernetes (ACK). ACK is a framework for building Kubernetes custom controllers, where each controller communicates with an AWS service API. These controllers allow Kubernetes users to provision AWS resources like buckets, databases, or message queues […]
Meta Llama 3 models are now available in HAQM SageMaker JumpStart
May 2024: This post was reviewed and updated with support for fine-tuning. Today, we are excited to announce that Meta Llama 3 foundation models are available through HAQM SageMaker JumpStart to deploy, run inference on, and fine-tune. The Llama 3 models are a collection of pre-trained and fine-tuned generative text models. The Llama 3 Instruct fine-tuned […]
Slack delivers native and secure generative AI powered by HAQM SageMaker JumpStart
We are excited to announce that Slack, a Salesforce company, has collaborated with HAQM SageMaker JumpStart to power Slack AI’s initial search and summarization features and provide safeguards for Slack to use large language models (LLMs) more securely. Slack worked with SageMaker JumpStart to host industry-leading third-party LLMs so that data is not shared with infrastructure owned by third-party model providers. This keeps customer data in Slack at all times and upholds the same security practices and compliance standards that customers expect from Slack itself.
Explore data with ease: Use SQL and Text-to-SQL in HAQM SageMaker Studio JupyterLab notebooks
HAQM SageMaker Studio provides a fully managed solution for data scientists to interactively build, train, and deploy machine learning (ML) models. In the process of working on their ML tasks, data scientists typically start their workflow by discovering relevant data sources and connecting to them. They then use SQL to explore, analyze, visualize, and integrate […]
Distributed training and efficient scaling with the HAQM SageMaker Model Parallel and Data Parallel Libraries
In this post, we explore the performance benefits of the HAQM SageMaker model parallelism (SMP) and distributed data parallelism (SMDDP) libraries, and how you can use them to train large models efficiently on SageMaker. We demonstrate the performance of SageMaker with benchmarks on clusters of up to 128 ml.p4d.24xlarge instances, using FSDP mixed precision with bfloat16 for the Llama 2 model.
Build an active learning pipeline for automatic annotation of images with AWS services
This blog post is co-written with Caroline Chung from Veoneer. Veoneer is a global automotive electronics company and a world leader in automotive electronic safety systems. They offer best-in-class restraint control systems and have delivered over 1 billion electronic control units and crash sensors to car manufacturers globally. The company continues to build on a […]
Build knowledge-powered conversational applications using LlamaIndex and Llama 2-Chat
Unlocking accurate and insightful answers from vast amounts of text is an exciting capability enabled by large language models (LLMs). When building LLM applications, it is often necessary to connect and query external data sources to provide relevant context to the model. One popular approach is using Retrieval Augmented Generation (RAG) to create Q&A systems […]
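The RAG pattern the teaser above refers to follows a simple shape: retrieve the documents most relevant to a question, then build a prompt that grounds the model's answer in that context. A minimal plain-Python sketch of that shape, with illustrative function names and toy data (this is not the LlamaIndex or Llama 2 API):

```python
# Minimal sketch of the RAG pattern: retrieve relevant documents,
# then assemble a grounded prompt for the LLM. All names and data
# here are illustrative assumptions, not a real library API.

def score(question: str, doc: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the question."""
    return sorted(docs, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble a prompt that grounds the answer in retrieved context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{ctx}\n"
        f"Question: {question}\n"
        "Answer:"
    )

docs = [
    "SageMaker JumpStart hosts foundation models for deployment.",
    "RAG augments an LLM prompt with retrieved external context.",
    "Crash sensors are delivered to car manufacturers globally.",
]
question = "What does RAG add to an LLM prompt?"
prompt = build_prompt(question, retrieve(question, docs, k=1))
print(prompt)
```

In a production system, the keyword-overlap scorer would be replaced by vector embeddings and a vector store, and the prompt would be sent to a hosted model; the retrieve-then-ground structure stays the same.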
Use everyday language to search and retrieve data with Mixtral 8x7B on HAQM SageMaker JumpStart
With the widespread adoption of generative artificial intelligence (AI) solutions, organizations are trying to use these technologies to make their teams more productive. One exciting use case is enabling natural language interactions with relational databases. Rather than writing complex SQL queries, you can describe in plain language what data you want to retrieve or manipulate. […]
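In a text-to-SQL flow like the one described above, the model's generated query is typically validated by executing it against the database. A minimal sketch using an in-memory SQLite table, where a hand-written query stands in for the model output (table schema and data are illustrative assumptions):

```python
import sqlite3

# Sketch of the validation step in a text-to-SQL flow: execute the
# generated query and inspect the result. The query below is a
# hand-written stand-in for LLM output; schema and rows are toy data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("ana", 30.0), ("bo", 10.0), ("ana", 5.0)],
)

# Stand-in for model output for the request:
# "total revenue per customer, highest first"
generated_sql = (
    "SELECT customer, SUM(total) AS revenue "
    "FROM orders GROUP BY customer ORDER BY revenue DESC"
)
rows = conn.execute(generated_sql).fetchall()
print(rows)  # [('ana', 35.0), ('bo', 10.0)]
```

Executing the query in a sandboxed or read-only session also catches syntax errors and hallucinated column names before results reach the user.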