AWS Machine Learning Blog
Unlock the knowledge in your Slack workspace with the Slack connector for Amazon Q Business
In this post, we demonstrate how to set up the Slack connector for Amazon Q Business to sync communications from both public and private channels while respecting user permissions.
Transitioning off Amazon Lookout for Metrics
In this post, we provide an overview of alternative AWS services with anomaly detection capabilities that customers can consider when transitioning their workloads.
Efficient pre-training of Llama 3-like model architectures using torchtitan on Amazon SageMaker
In this post, we collaborate with the team working on PyTorch at Meta to show how the torchtitan library accelerates and simplifies the pre-training of Meta Llama 3-like model architectures. We highlight key torchtitan features and capabilities, such as FSDP2, torch.compile integration, and FP8 support, that optimize training efficiency.
Time series forecasting with Amazon SageMaker AutoML
In this blog post, we explore a comprehensive approach to time series forecasting using the Amazon SageMaker AutoMLV2 Software Development Kit (SDK). SageMaker AutoMLV2 is part of the SageMaker Autopilot suite, which automates the end-to-end machine learning workflow from data preparation to model deployment.
Automate user onboarding for financial services with a digital assistant powered by Amazon Bedrock
In this post, we present a solution that harnesses the power of generative AI to streamline the user onboarding process for financial services through a digital assistant.
Build a generative AI Slack chat assistant using Amazon Bedrock and Amazon Kendra
In this post, we describe the development of a generative AI Slack application powered by Amazon Bedrock and Amazon Kendra. The application is designed as an internal-facing Slack chat assistant that helps answer questions about the indexed content.
Create your fashion assistant application using Amazon Titan models and Amazon Bedrock Agents
In this post, we implement a fashion assistant agent using Amazon Bedrock Agents and the Amazon Titan family of models. The fashion assistant provides a personalized, multimodal conversational experience.
How Aviva built a scalable, secure, and reliable MLOps platform using Amazon SageMaker
In this post, we describe how Aviva built a fully serverless MLOps platform based on the AWS Enterprise MLOps Framework and Amazon SageMaker to integrate DevOps best practices into the ML lifecycle. This solution establishes MLOps practices to standardize model development, streamline ML model deployment, and provide consistent monitoring.
Visier’s data science team boosts their model output 10 times by migrating to Amazon SageMaker
In this post, we learn how Visier was able to boost their model output by 10 times, accelerate innovation cycles, and unlock new opportunities using Amazon SageMaker.
Implement model-independent safety measures with Amazon Bedrock Guardrails
In this post, we discuss how you can use the ApplyGuardrail API in common generative AI architectures, such as with third-party or self-hosted large language models (LLMs) or in a self-managed Retrieval Augmented Generation (RAG) architecture.
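The ApplyGuardrail API lets you evaluate text against a guardrail without invoking a model, which is what makes it usable with third-party or self-hosted LLMs. A minimal sketch of assembling such a request is shown below; the guardrail ID, version, and sample text are placeholders, and the actual boto3 call is left commented out so the sketch runs without AWS credentials.

```python
import json

def build_apply_guardrail_request(guardrail_id, guardrail_version, text, source="INPUT"):
    """Assemble the request payload for the bedrock-runtime ApplyGuardrail API.

    `source` is "INPUT" to screen a user prompt, or "OUTPUT" to screen a
    model response before it is returned to the user.
    """
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "source": source,
        "content": [{"text": {"text": text}}],
    }

# Placeholder identifiers for illustration only.
request = build_apply_guardrail_request("my-guardrail-id", "1", "What is the refund policy?")
print(json.dumps(request, indent=2))

# With AWS credentials configured, the call itself would look like:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.apply_guardrail(**request)
# intervened = response["action"] == "GUARDRAIL_INTERVENED"
```

Because the guardrail runs as a standalone API call, the same screening step can wrap a self-hosted model or the retrieval stage of a RAG pipeline.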