AWS Machine Learning Blog
Build a computer vision-based asset inventory application with low or no training
In this post, we present a solution that uses generative AI and large language models (LLMs) to alleviate the time-consuming and labor-intensive tasks required to build a computer vision application, enabling you to immediately start taking pictures of your asset labels and extract the necessary information to update the inventory using AWS services.
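A minimal sketch of the underlying idea, not the post's exact code: send a photo of an asset label to a multimodal model on Amazon Bedrock through the Converse API and ask for structured fields back. The model ID, prompt, and field names are illustrative assumptions.

```python
# Hypothetical sketch: extract asset-label fields from a photo with a
# multimodal model on Amazon Bedrock via the Converse API.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("asset_label.jpg", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model choice
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
            {"text": "Extract serial_number, manufacturer, and model from this "
                     "asset label. Respond with JSON only."},
        ],
    }],
)

# Parse the model's JSON answer into a dict the inventory system can consume.
label_fields = json.loads(response["output"]["message"]["content"][0]["text"])
print(label_fields)
```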
Clario enhances the quality of the clinical trial documentation process with Amazon Bedrock
The collaboration between Clario and AWS demonstrated the potential of AWS AI and machine learning (AI/ML) services and generative AI models, such as Anthropic's Claude, to streamline document generation processes in the life sciences industry, specifically for complicated clinical trial processes.
Optimizing Mixtral 8x7B on Amazon SageMaker with AWS Inferentia2
This post demonstrates how to deploy and serve the Mixtral 8x7B language model on AWS Inferentia2 instances for cost-effective, high-performance inference. We'll walk through model compilation using Hugging Face Optimum Neuron, which provides a set of tools enabling straightforward model loading, training, and inference, and the Text Generation Inference (TGI) container, which provides the toolkit for deploying and serving LLMs with Hugging Face.
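As a rough sketch of the compilation step with Optimum Neuron (the compiler settings shown are assumptions that would be tuned for the target inf2 instance; the post covers the full compile-and-deploy flow with TGI):

```python
# Hypothetical sketch: compile Mixtral 8x7B into a Neuron-optimized artifact
# with Hugging Face Optimum Neuron, then save it for serving with TGI.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,            # trigger Neuron compilation
    batch_size=1,           # assumed serving batch size
    sequence_length=4096,   # assumed maximum context length
    num_cores=24,           # assumed NeuronCores on an inf2.48xlarge
    auto_cast_type="bf16",  # assumed precision
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Save the compiled artifacts; these would be uploaded to Amazon S3 for deployment.
model.save_pretrained("mixtral-8x7b-neuron")
tokenizer.save_pretrained("mixtral-8x7b-neuron")
```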
Elevate business productivity with Amazon Q and Amazon Connect
In this post, we demonstrate how to elevate business productivity by using Amazon Q to provide insights that enable research, data analysis, and reporting of potential fraud cases within Amazon Connect.
Build multi-agent systems with LangGraph and Amazon Bedrock
This post demonstrates how to integrate the open-source multi-agent framework LangGraph with Amazon Bedrock. It explains how to use LangGraph and Amazon Bedrock to build powerful, interactive multi-agent applications that use graph-based orchestration.
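A minimal sketch of the pattern, assuming a two-node graph where each node calls a model on Amazon Bedrock; node names, the model ID, and the routing are illustrative, and the post builds much richer graph-based orchestration:

```python
# Hypothetical sketch: a small LangGraph workflow backed by Amazon Bedrock.
from typing import TypedDict
from langchain_aws import ChatBedrockConverse
from langgraph.graph import StateGraph, START, END

llm = ChatBedrockConverse(model="anthropic.claude-3-sonnet-20240229-v1:0")  # assumed model

class AgentState(TypedDict):
    question: str
    research: str
    answer: str

def researcher(state: AgentState) -> dict:
    # First agent: gather background notes on the question.
    notes = llm.invoke(f"Gather key facts about: {state['question']}").content
    return {"research": notes}

def writer(state: AgentState) -> dict:
    # Second agent: compose the final answer from the researcher's notes.
    answer = llm.invoke(
        f"Using these notes:\n{state['research']}\nAnswer: {state['question']}"
    ).content
    return {"answer": answer}

builder = StateGraph(AgentState)
builder.add_node("researcher", researcher)
builder.add_node("writer", writer)
builder.add_edge(START, "researcher")
builder.add_edge("researcher", "writer")
builder.add_edge("writer", END)
graph = builder.compile()

result = graph.invoke({"question": "What drives LLM inference cost?"})
print(result["answer"])
```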
Dynamic text-to-SQL for enterprise workloads with Amazon Bedrock Agents
This post demonstrates how enterprises can implement a scalable agentic text-to-SQL solution using Amazon Bedrock Agents, with advanced error-handling tools and automated schema discovery to enhance database query efficiency.
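A minimal sketch of the error-handling idea, assuming a function-details action group whose Lambda executes the agent's generated SQL and returns either the rows or the database error text so the agent can correct the query and retry. The SQLite backend and the "run_sql" function name are illustrative assumptions, not the post's implementation.

```python
# Hypothetical action group Lambda for an Amazon Bedrock agent doing text-to-SQL.
import json
import sqlite3

def lambda_handler(event, context):
    # Bedrock Agents pass tool parameters as a list of {name, type, value} dicts.
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    query = params.get("query", "")

    try:
        with sqlite3.connect("/tmp/analytics.db") as conn:  # assumed database
            rows = conn.execute(query).fetchmany(50)
        body = json.dumps({"rows": rows})
    except sqlite3.Error as err:
        # Returning the error verbatim lets the agent repair the SQL and retry.
        body = json.dumps({"error": str(err)})

    return {
        "messageVersion": event.get("messageVersion", "1.0"),
        "response": {
            "actionGroup": event["actionGroup"],
            "function": event.get("function", "run_sql"),
            "functionResponse": {"responseBody": {"TEXT": {"body": body}}},
        },
    }
```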
Building an AIOps chatbot with Amazon Q Business custom plugins
In this post, we demonstrate how you can use custom plugins for Amazon Q Business to build a chatbot that can interact with multiple APIs using natural language prompts. We showcase how to build an AIOps chatbot that enables users to interact with their AWS infrastructure through natural language queries and commands. The chatbot is capable of handling tasks such as querying data about Amazon Elastic Compute Cloud (Amazon EC2) ports and Amazon Simple Storage Service (Amazon S3) bucket access settings.
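As an illustrative sketch of the kinds of AWS API calls a plugin's backing API might wrap to answer those questions (the helper function names are assumptions, not the post's code):

```python
# Hypothetical helpers behind an AIOps plugin: inspect EC2 ports and S3 access settings.
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

def open_ports(group_id: str) -> list[dict]:
    """Return the inbound port ranges of a security group."""
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]
    return [
        {"from": rule.get("FromPort"), "to": rule.get("ToPort"), "protocol": rule["IpProtocol"]}
        for rule in group["IpPermissions"]
    ]

def bucket_public_access(bucket: str) -> dict:
    """Return the public access block configuration of an S3 bucket."""
    return s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
```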
How TransPerfect Improved Translation Quality and Efficiency Using Amazon Bedrock
This post describes how the AWS Customer Channel Technology – Localization Team worked with TransPerfect to integrate Amazon Bedrock into the GlobalLink translation management system, a cloud-based solution designed to help organizations manage their multilingual content and translation workflows. Organizations use TransPerfect’s solution to rapidly create and deploy content at scale in multiple languages using AI.
Racing beyond DeepRacer: Debut of the AWS LLM League
The AWS LLM League was designed to lower the barriers to entry in generative AI model customization by providing an experience where participants, regardless of their prior data science experience, could engage in fine-tuning LLMs. Using Amazon SageMaker JumpStart, attendees were guided through the process of customizing LLMs to address real business challenges relevant to their domain.
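A minimal sketch of fine-tuning a JumpStart foundation model with the SageMaker Python SDK, similar in spirit to the League's workflow; the model ID, hyperparameters, and S3 path are illustrative assumptions:

```python
# Hypothetical sketch: fine-tune and deploy a JumpStart model with the SageMaker SDK.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-3-8b",  # assumed base model
    environment={"accept_eula": "true"},        # required for gated models
)
estimator.set_hyperparameters(instruction_tuned="True", epoch="3")  # assumed settings
estimator.fit({"training": "s3://my-bucket/llm-league/train/"})     # assumed dataset location

# Deploy the fine-tuned model to a real-time endpoint for evaluation.
predictor = estimator.deploy()
```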
Reduce ML training costs with Amazon SageMaker HyperPod
In this post, we explore the challenges of large-scale frontier model training, focusing on hardware failures and the benefits of Amazon SageMaker HyperPod, a solution that minimizes disruptions, enhances efficiency, and reduces training costs.