AWS Machine Learning Blog

Efficient Pre-training of Llama 3-like model architectures using torchtitan on HAQM SageMaker

In this post, we collaborate with the PyTorch team at Meta to showcase how the torchtitan library accelerates and simplifies the pre-training of Meta Llama 3-like model architectures. We highlight key torchtitan features and capabilities, such as FSDP2, torch.compile integration, and FP8 support, that optimize training efficiency.
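Features like those above are typically enabled through torchtitan's TOML training configs. The fragment below is an illustrative sketch only; the section and key names approximate torchtitan's Llama 3 configs and should be verified against the repository's train_configs.

```toml
# Illustrative torchtitan-style config (key names are approximations;
# check against torchtitan's published Llama 3 train configs).

[model]
name = "llama3"
flavor = "8B"

[training]
compile = true                    # apply torch.compile to the model

[parallelism]
data_parallel_shard_degree = -1   # FSDP2-style sharding across all ranks

[float8]
enable_float8_linear = true       # FP8 linear layers on supported GPUs
```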

Time series forecasting with HAQM SageMaker AutoML

In this blog post, we explore a comprehensive approach to time series forecasting using the HAQM SageMaker AutoMLV2 Software Development Kit (SDK). SageMaker AutoMLV2 is part of the SageMaker Autopilot suite, which automates the end-to-end machine learning workflow from data preparation to model deployment.
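As a rough sketch of what launching such a job looks like, the snippet below assembles a request for the SageMaker `CreateAutoMLJobV2` API with a time series forecasting problem config and submits it with boto3. The bucket URIs, role ARN, job name, and attribute names (`demand`, `timestamp`, `item_id`) are placeholders you would adapt to your own dataset.

```python
# Sketch of launching a SageMaker AutoMLV2 time series forecasting job.
# All identifiers below are placeholders, not values from the post.

def build_timeseries_job_request(job_name, input_s3_uri, output_s3_uri, role_arn):
    """Assemble a CreateAutoMLJobV2 request for time series forecasting."""
    return {
        "AutoMLJobName": job_name,
        "AutoMLJobInputDataConfig": [{
            "ChannelType": "training",
            "ContentType": "text/csv;header=present",
            "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix",
                                            "S3Uri": input_s3_uri}},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3_uri},
        "AutoMLProblemTypeConfig": {
            "TimeSeriesForecastingJobConfig": {
                "ForecastFrequency": "D",   # daily observations
                "ForecastHorizon": 14,      # predict 14 periods ahead
                "TimeSeriesConfig": {
                    "TargetAttributeName": "demand",
                    "TimestampAttributeName": "timestamp",
                    "ItemIdentifierAttributeName": "item_id",
                },
            }
        },
        "RoleArn": role_arn,
    }

def launch_job(request):
    """Submit the job; requires AWS credentials and the boto3 package."""
    import boto3  # imported lazily so the builder above runs without AWS access
    sagemaker = boto3.client("sagemaker")
    return sagemaker.create_auto_ml_job_v2(**request)
```

You would then poll `describe_auto_ml_job_v2` until the job completes and deploy the best candidate as a SageMaker endpoint.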

Automate user onboarding for financial services with a digital assistant powered by HAQM Bedrock

In this post, we present a solution that harnesses the power of generative AI to streamline the user onboarding process for financial services through a digital assistant.

Create your fashion assistant application using HAQM Titan models and HAQM Bedrock Agents

In this post, we implement a fashion assistant agent using HAQM Bedrock Agents and the HAQM Titan family of models. The fashion assistant provides a personalized, multimodal conversational experience.
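As a minimal sketch of the conversational side, the snippet below sends one user turn to a Bedrock agent through the `InvokeAgent` API and stitches the streamed reply together. The agent ID, alias ID, and session ID are placeholders, not values from the post.

```python
# Sketch of one conversational turn with a Bedrock agent.
# AGENT_ID and ALIAS_ID are placeholders for a deployed agent.

def collect_agent_reply(response):
    """Concatenate the text chunks from an InvokeAgent event stream."""
    parts = []
    for event in response.get("completion", []):
        chunk = event.get("chunk")
        if chunk and "bytes" in chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

def ask_fashion_assistant(prompt, agent_id="AGENT_ID", alias_id="ALIAS_ID"):
    """Send one user turn to the agent; requires AWS credentials and boto3."""
    import boto3  # lazy import keeps collect_agent_reply testable offline
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId="demo-session-1",  # reuse the same ID to keep conversation state
        inputText=prompt,
    )
    return collect_agent_reply(response)
```

Reusing the same `sessionId` across calls lets the agent carry context between turns, which is what makes the assistant conversational rather than stateless.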

How Aviva built a scalable, secure, and reliable MLOps platform using HAQM SageMaker

In this post, we describe how Aviva built a fully serverless MLOps platform based on the AWS Enterprise MLOps Framework and HAQM SageMaker to integrate DevOps best practices into the ML lifecycle. This solution establishes MLOps practices to standardize model development, streamline ML model deployment, and provide consistent monitoring.

Implement model-independent safety measures with HAQM Bedrock Guardrails

In this post, we discuss how you can use the ApplyGuardrail API in common generative AI architectures such as third-party or self-hosted large language models (LLMs), or in a self-managed Retrieval Augmented Generation (RAG) architecture.
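To make the pattern concrete, here is a minimal sketch of screening text with `ApplyGuardrail` before passing it to a self-hosted LLM. The guardrail ID and version are placeholders; `source="OUTPUT"` would be used to check model responses instead of user input.

```python
# Sketch of model-independent input screening with the ApplyGuardrail API.
# GUARDRAIL_ID and the "DRAFT" version are placeholders.

def build_guardrail_content(text):
    """Wrap raw text in the content shape ApplyGuardrail expects."""
    return [{"text": {"text": text}}]

def is_blocked(apply_guardrail_response):
    """True when the guardrail intervened on the assessed content."""
    return apply_guardrail_response.get("action") == "GUARDRAIL_INTERVENED"

def check_input(text, guardrail_id="GUARDRAIL_ID", version="DRAFT"):
    """Assess model input; requires AWS credentials and boto3."""
    import boto3  # lazy import so the helpers above run without AWS access
    client = boto3.client("bedrock-runtime")
    resp = client.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=version,
        source="INPUT",                      # use "OUTPUT" for model responses
        content=build_guardrail_content(text),
    )
    return is_blocked(resp)
```

Because the guardrail is applied as a standalone API call, the same policy can sit in front of any model, whether it runs on Bedrock, a third-party service, or your own infrastructure.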