AWS Machine Learning Blog

Category: Amazon OpenSearch Service

Build generative AI applications quickly with Amazon Bedrock in SageMaker Unified Studio

In this post, we’ll show how anyone in your company can use Amazon Bedrock in SageMaker Unified Studio to quickly create a generative AI chat agent application that analyzes sales performance data. Through simple conversations, business teams can use the chat agent to extract valuable insights from both structured and unstructured data sources without writing code or managing complex data pipelines.

Build a read-through semantic cache with Amazon OpenSearch Serverless and Amazon Bedrock

This post presents a strategy for optimizing LLM-based applications. Given the increasing need for efficient and cost-effective AI solutions, we present a serverless read-through caching blueprint that takes advantage of repeated prompt patterns. With this cache, developers can save and reuse responses to similar prompts, improving their systems’ efficiency and response times.

Build cost-effective RAG applications with Binary Embeddings in Amazon Titan Text Embeddings V2, Amazon OpenSearch Serverless, and Amazon Bedrock Knowledge Bases

Today, we are happy to announce the availability of Binary Embeddings for Amazon Titan Text Embeddings V2 in Amazon Bedrock Knowledge Bases and Amazon OpenSearch Serverless. This post summarizes the benefits of the new binary vector support and explains how to get started.

Build a reverse image search engine with Amazon Titan Multimodal Embeddings in Amazon Bedrock and AWS managed services

In this post, you will learn how to extract key objects from image queries using Amazon Rekognition and build a reverse image search engine using Amazon Titan Multimodal Embeddings from Amazon Bedrock in combination with Amazon OpenSearch Serverless.
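The combination described above can be sketched as a two-stage search: use detected object labels to pre-filter candidates, then rank the survivors by embedding similarity. This is an illustrative sketch, not the post's solution; the catalog structure and `reverse_image_search` helper are assumptions, though the `Labels`/`Name`/`Confidence` fields do match the shape of a Rekognition DetectLabels response.

```python
def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def confident_labels(response: dict, min_conf: float = 80.0) -> list[str]:
    # A Rekognition DetectLabels response carries a "Labels" list with
    # "Name" and "Confidence" fields; keep only confident detections.
    return [l["Name"] for l in response["Labels"] if l["Confidence"] >= min_conf]

def reverse_image_search(query_labels, query_vec, catalog, k=3):
    """Pre-filter catalog images that share a detected object label with the
    query, then rank the survivors by embedding similarity (dot product of
    unit-normalized vectors)."""
    candidates = [img for img in catalog if set(img["labels"]) & set(query_labels)]
    candidates.sort(key=lambda img: dot(query_vec, img["vec"]), reverse=True)
    return [img["id"] for img in candidates[:k]]
```

The label pre-filter keeps visually similar but semantically unrelated images (say, a car photographed against the same background as a dog) out of the candidate set before the vector ranking runs.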

Supercharge your LLMs with RAG at scale using AWS Glue for Apache Spark

In this post, we explore building a reusable RAG data pipeline on LangChain (an open source framework for building applications based on LLMs) and integrating it with AWS Glue and Amazon OpenSearch Serverless. The resulting solution is a reference architecture for scalable RAG indexing and deployment.
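The indexing side of such a pipeline boils down to two steps: split documents into overlapping chunks, then store them in a searchable index. The toy sketch below illustrates only that flow; in the actual architecture, Glue for Apache Spark would run the chunking at scale and OpenSearch Serverless would hold vector embeddings queried by k-NN similarity, not the keyword-overlap scoring used here as a stand-in.

```python
def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split a document into fixed-size character chunks with overlap, so a
    sentence cut at one boundary still appears whole in a neighbor."""
    chunks, i = [], 0
    while i < len(text):
        chunks.append(text[i:i + size])
        i += size - overlap
    return chunks

class RagIndex:
    """Toy index: stores chunks and retrieves by keyword overlap."""

    def __init__(self):
        self.chunks: list[str] = []

    def add_document(self, doc: str) -> None:
        self.chunks.extend(chunk(doc))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Score each chunk by how many query words it shares, return top k.
        q = set(query.lower().split())
        return sorted(self.chunks,
                      key=lambda c: len(q & set(c.lower().split())),
                      reverse=True)[:k]
```

The retrieved chunks would then be injected into the LLM prompt as context, which is the "retrieval-augmented" half of RAG.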

Create a generative AI-based application builder assistant using Amazon Bedrock Agents

Agentic workflows offer a new approach to building dynamic, complex business use case-based workflows, using large language models (LLMs) as their reasoning engine. In this post, we set up an agent using Amazon Bedrock Agents to act as a software application builder assistant.

Create a multimodal chatbot tailored to your unique dataset with Amazon Bedrock FMs

In this post, we show how to create a multimodal chat assistant on Amazon Web Services (AWS) using Amazon Bedrock models, where users can submit images and questions, and text responses will be sourced from a closed set of proprietary documents.