AWS Machine Learning Blog


Combine keyword and semantic search for text and images using HAQM Bedrock and HAQM OpenSearch Service

In this post, we walk you through how to build a hybrid search solution using OpenSearch Service powered by multimodal embeddings from the HAQM Titan Multimodal Embeddings G1 model through HAQM Bedrock. This solution demonstrates how you can enable users to submit both text and images as queries to retrieve relevant results from a sample retail image dataset.
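
As a quick preview of the moving parts (the index, field, and pipeline names below are illustrative assumptions, not the post's exact setup): embed the query with the Titan Multimodal Embeddings G1 model, then issue an OpenSearch hybrid query that combines lexical matching with k-NN on the embedding.

```python
import base64
import json

import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime")

def embed_query(text=None, image_path=None):
    """Get a multimodal embedding from the Titan Multimodal Embeddings G1 model."""
    body = {}
    if text:
        body["inputText"] = text
    if image_path:
        with open(image_path, "rb") as f:
            body["inputImage"] = base64.b64encode(f.read()).decode("utf-8")
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-image-v1",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["embedding"]

# Hypothetical OpenSearch Service domain; authentication (SigV4 or basic auth) omitted for brevity.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

query_text = "red running shoes"
vector = embed_query(text=query_text)  # an image (or both) could be embedded the same way

# Hybrid query: lexical match on the description plus k-NN on the image embedding.
# Assumes a hybrid-search pipeline with a normalization processor is set as the index default,
# and that "retail-images" has a "description" text field and an "image_vector" knn_vector field.
results = client.search(
    index="retail-images",
    body={
        "query": {
            "hybrid": {
                "queries": [
                    {"match": {"description": query_text}},
                    {"knn": {"image_vector": {"vector": vector, "k": 5}}},
                ]
            }
        }
    },
)
```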

Accuracy evaluation framework for HAQM Q Business – Part 2

In the first post of this series, we introduced a comprehensive evaluation framework for HAQM Q Business, a fully managed Retrieval Augmented Generation (RAG) solution that uses your company’s proprietary data without the complexity of managing large language models (LLMs). The first post focused on selecting appropriate use cases, preparing data, and implementing metrics to […]

Use HAQM Bedrock Intelligent Prompt Routing for cost and latency benefits

Today, we’re happy to announce the general availability of HAQM Bedrock Intelligent Prompt Routing. In this blog post, we share highlights from our internal testing, explain how you can get started, and point out some caveats and best practices. We encourage you to incorporate HAQM Bedrock Intelligent Prompt Routing into your new and existing generative AI applications.
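
As a quick orientation (not the post's exact walkthrough): a prompt router is invoked like any other model, by passing the router's ARN as the modelId in the Converse API, and the router selects an underlying model for each request. The ARN below is a placeholder, and the trace lookup at the end is an assumption about what the response may include.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholder ARN: use the ARN of a default or configured prompt router in your account and Region.
PROMPT_ROUTER_ARN = "arn:aws:bedrock:us-east-1:111122223333:default-prompt-router/anthropic.claude:1"

response = bedrock_runtime.converse(
    modelId=PROMPT_ROUTER_ARN,  # the router chooses the underlying model per request
    messages=[{"role": "user", "content": [{"text": "Summarize the key points of our Q3 report."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
# The response may include routing trace details indicating which model was selected.
print(response.get("trace", {}))
```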

The future of quality assurance: Shift-left testing with QyrusAI and HAQM Bedrock

In this post, we explore how QyrusAI and HAQM Bedrock are revolutionizing shift-left testing, enabling teams to deliver better software faster. HAQM Bedrock is a fully managed service that allows businesses to build and scale generative AI applications using foundation models (FMs) from leading AI providers. It integrates seamlessly with AWS services and offers customization, security, and scalability without the need to manage infrastructure.

Workflow Diagram:
1. Import your user, item, and interaction data into HAQM Personalize.
2. Train an HAQM Personalize “Top picks for you” recommender.
3. Get the top recommended movies for each user.
4. Use a prompt template, the recommended movies, and the user demographics to generate the model prompt.
5. Use HAQM Bedrock LLMs to generate personalized outbound communication with the prompt.
6. Share the personalized outbound communication with each of your users.

Generate user-personalized communication with HAQM Personalize and HAQM Bedrock

In this post, we demonstrate how to use HAQM Personalize and HAQM Bedrock to generate personalized outreach emails for individual users, using a video-on-demand use case. The same approach can be applied to other domains, such as creating compelling customer experiences for ecommerce and digital marketing.
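
The following sketch, which follows the workflow in the diagram above, shows how the pieces can fit together with boto3; the recommender ARN, model ID, and user attributes are placeholders, and mapping recommended item IDs to movie titles from your catalog is omitted.

```python
import boto3

personalize_runtime = boto3.client("personalize-runtime")
bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholder ARN for a "Top picks for you" recommender.
RECOMMENDER_ARN = "arn:aws:personalize:us-east-1:111122223333:recommender/top-picks-for-you"

def personalized_email(user_id, user_name, favorite_genre):
    # Step 3: get the top recommended movies for this user.
    recs = personalize_runtime.get_recommendations(
        recommenderArn=RECOMMENDER_ARN, userId=user_id, numResults=5
    )
    # In practice, map item IDs to movie titles from your catalog before prompting.
    movies = ", ".join(item["itemId"] for item in recs["itemList"])

    # Steps 4-5: build the prompt from a template and generate the email with a Bedrock LLM.
    prompt = (
        f"Write a short, friendly email to {user_name}, who enjoys {favorite_genre} movies, "
        f"inviting them to watch these recommended titles: {movies}."
    )
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # any supported text model works here
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

print(personalized_email("user-123", "Alex", "science fiction"))
```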

Automating regulatory compliance: A multi-agent solution using HAQM Bedrock and CrewAI

In this post, we explore how AI agents can streamline compliance and fulfill regulatory requirements for financial institutions using HAQM Bedrock and CrewAI. We demonstrate how to build a multi-agent system that can automatically summarize new regulations, assess their impact on operations, and provide prescriptive technical guidance. You’ll learn how to use HAQM Bedrock Knowledge Bases and HAQM Bedrock Agents with CrewAI to create a comprehensive, automated compliance solution.
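
The sketch below shows the general shape of a two-agent CrewAI crew backed by an HAQM Bedrock model. The roles, tasks, and model ID are illustrative; it assumes a recent CrewAI release that exposes the LLM wrapper (which routes Bedrock calls through LiteLLM) and leaves out the HAQM Bedrock Knowledge Bases retrieval tooling covered in the post.

```python
from crewai import Agent, Task, Crew, LLM

# CrewAI routes model calls through LiteLLM; "bedrock/..." selects an HAQM Bedrock model.
llm = LLM(model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0")

summarizer = Agent(
    role="Regulation summarizer",
    goal="Summarize new financial regulations in plain language",
    backstory="You are a compliance analyst at a financial institution.",
    llm=llm,
)
assessor = Agent(
    role="Impact assessor",
    goal="Assess how a regulation affects current operations and suggest technical controls",
    backstory="You translate regulatory text into prescriptive engineering guidance.",
    llm=llm,
)

summarize = Task(
    description="Summarize the following regulation: {regulation_text}",
    expected_output="A one-page plain-language summary",
    agent=summarizer,
)
assess = Task(
    description="Using the summary, list impacted processes and recommended technical controls.",
    expected_output="A prioritized list of impacts and recommendations",
    agent=assessor,
)

crew = Crew(agents=[summarizer, assessor], tasks=[summarize, assess])
result = crew.kickoff(inputs={"regulation_text": "..."})
print(result)
```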

Implement human-in-the-loop confirmation with HAQM Bedrock Agents

In this post, we focus on enabling end users to approve actions and provide feedback using built-in HAQM Bedrock Agents features, specifically the human-in-the-loop (HITL) patterns that support safe and effective agent operations. We explore these patterns using a Human Resources (HR) agent example that helps employees request time off.
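
For a rough idea of what the confirmation flow can look like in code, here is a minimal sketch built on the agent's return-of-control event. The agent and alias IDs are placeholders, and the confirmationState field used to relay the user's decision is our assumption about the built-in confirmation feature, so verify the current API shapes before relying on it.

```python
import boto3

agents_runtime = boto3.client("bedrock-agent-runtime")

# Hypothetical HR agent identifiers.
AGENT_ID, AGENT_ALIAS_ID, SESSION_ID = "AGENT1234", "ALIAS1234", "session-001"

# First turn: the employee asks for time off.
response = agents_runtime.invoke_agent(
    agentId=AGENT_ID, agentAliasId=AGENT_ALIAS_ID, sessionId=SESSION_ID,
    inputText="Please book vacation for me from July 1 to July 5.",
)

for event in response["completion"]:
    if "returnControl" in event:
        rc = event["returnControl"]
        fn = rc["invocationInputs"][0]["functionInvocationInput"]
        print(f"Agent wants to call {fn['function']} with {fn['parameters']}. Approve? (y/n)")
        approved = input().strip().lower() == "y"

        # Resume the conversation with the user's decision (no inputText on this call).
        followup = agents_runtime.invoke_agent(
            agentId=AGENT_ID, agentAliasId=AGENT_ALIAS_ID, sessionId=SESSION_ID,
            sessionState={
                "invocationId": rc["invocationId"],
                "returnControlInvocationResults": [{
                    "functionResult": {
                        "actionGroup": fn["actionGroup"],
                        "function": fn["function"],
                        # Assumed field for the built-in user-confirmation feature.
                        "confirmationState": "CONFIRM" if approved else "DENY",
                    }
                }],
            },
        )
        for followup_event in followup["completion"]:
            if "chunk" in followup_event:
                print(followup_event["chunk"]["bytes"].decode())
    elif "chunk" in event:
        print(event["chunk"]["bytes"].decode())
```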

Using Large Language Models on HAQM Bedrock for multi-step task execution

This post explores the application of LLMs in executing complex analytical queries through an API, with a specific focus on HAQM Bedrock. To demonstrate the process, we present a use case where the system identifies the patient with the fewest vaccines by retrieving, grouping, and sorting data, and ultimately presenting the final result.
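
To make the pattern concrete, here is a minimal sketch of a tool-use loop with the HAQM Bedrock Converse API; the get_vaccination_records tool and its in-memory data are stand-ins for the API the post describes, and the model ID is illustrative.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # any model with tool-use support

# Stand-in for the backend API that returns patient vaccination records.
def get_vaccination_records():
    return [
        {"patient": "P001", "vaccine": "flu"}, {"patient": "P001", "vaccine": "tetanus"},
        {"patient": "P002", "vaccine": "flu"},
    ]

tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "get_vaccination_records",
            "description": "Return all patient vaccination records.",
            "inputSchema": {"json": {"type": "object", "properties": {}}},
        }
    }]
}

messages = [{"role": "user", "content": [{"text": "Which patient has the fewest vaccines?"}]}]
response = bedrock_runtime.converse(modelId=MODEL_ID, messages=messages, toolConfig=tool_config)

# If the model asks for the tool, run it and send the result back so the model can
# group, sort, and produce the final answer.
while response["stopReason"] == "tool_use":
    messages.append(response["output"]["message"])
    tool_use = next(c["toolUse"] for c in response["output"]["message"]["content"] if "toolUse" in c)
    messages.append({
        "role": "user",
        "content": [{"toolResult": {
            "toolUseId": tool_use["toolUseId"],
            "content": [{"json": {"records": get_vaccination_records()}}],
        }}],
    })
    response = bedrock_runtime.converse(modelId=MODEL_ID, messages=messages, toolConfig=tool_config)

print(response["output"]["message"]["content"][0]["text"])
```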

Integrating custom dependencies in HAQM SageMaker Canvas workflows

When implementing machine learning workflows in HAQM SageMaker Canvas, organizations might need external dependencies for their specific use cases. Although SageMaker Canvas provides powerful no-code and low-code capabilities for rapid experimentation, some projects require specialized dependencies and libraries that aren’t included in SageMaker Canvas by default. This post provides an example of how to incorporate code that relies on external dependencies into your SageMaker Canvas workflows.

HAQM Bedrock launches Session Management APIs for generative AI applications (Preview)

HAQM Bedrock announces the preview launch of Session Management APIs, a new capability that enables developers to simplify state and context management for generative AI applications built with popular open source frameworks such as LangGraph and LlamaIndex. Session Management APIs provide an out-of-the-box solution that enables developers to securely manage state and conversation context across […]
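
As a rough sketch of what the session lifecycle can look like with boto3: the operation and parameter names below reflect the preview as we understand it and may change, the payload shape is an assumption, and the LangGraph checkpointing integration is omitted.

```python
import uuid
from datetime import datetime, timezone

import boto3

# The Session Management APIs are exposed on the bedrock-agent-runtime client (preview).
client = boto3.client("bedrock-agent-runtime")

# Create a session to hold state and conversation context for one user interaction.
session = client.create_session()
session_id = session["sessionId"]

# Group related work (for example, one user turn) into an invocation.
invocation = client.create_invocation(sessionIdentifier=session_id)
invocation_id = invocation["invocationId"]

# Persist a step of the interaction; the payload shape shown here is an assumption.
client.put_invocation_step(
    sessionIdentifier=session_id,
    invocationIdentifier=invocation_id,
    invocationStepId=str(uuid.uuid4()),
    invocationStepTime=datetime.now(timezone.utc),
    payload={"contentBlocks": [{"text": "User asked about order status."}]},
)

# End the session when the conversation is complete.
client.end_session(sessionIdentifier=session_id)
```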