AWS Machine Learning Blog

Category: Amazon Bedrock


Evaluate healthcare generative AI applications using LLM-as-a-judge on AWS

In this post, we demonstrate how to implement this evaluation framework using Amazon Bedrock, compare the performance of different generator models, including Anthropic’s Claude and Amazon Nova on Amazon Bedrock, and showcase how to use the new RAG evaluation feature to optimize knowledge base parameters and assess retrieval quality.

How Pattern PXM’s Content Brief is driving conversion on ecommerce marketplaces using AI

Pattern is a leader in ecommerce acceleration, helping brands navigate the complexities of selling on marketplaces and achieve profitable growth through a combination of proprietary technology and on-demand expertise. In this post, we share how Pattern uses AWS services to process trillions of data points to deliver actionable insights, optimizing product listings across multiple services.

How IDIADA optimized its intelligent chatbot with Amazon Bedrock

Applus+ IDIADA is a global partner to the automotive industry with over 30 years of experience supporting customers in product development through design, engineering, testing, and homologation services. In 2021, it established the Digital Solutions department. In this post, we showcase the research process undertaken to develop a classifier for human interactions in this AI-based environment using Amazon Bedrock.

Orchestrate an intelligent document processing workflow using tools in Amazon Bedrock

This intelligent document processing solution uses Amazon Bedrock FMs to orchestrate a sophisticated workflow for handling multi-page healthcare documents with mixed content types. The solution uses the FMs’ tool use capabilities, accessed through the Amazon Bedrock Converse API. This enables the FMs not just to process text, but to actively engage with external tools and APIs to perform complex document analysis tasks.
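Tool use with the Converse API works by declaring tool schemas in a `toolConfig` and handling the `toolUse` blocks the model returns. The sketch below shows that loop for one turn; the tool name and schema (`extract_patient_record`) are hypothetical illustrations, not the tools from the post.

```python
# Hypothetical tool declaration for a healthcare document workflow. The
# toolSpec/inputSchema shape matches what bedrock-runtime's converse() accepts.
EXTRACT_TOOL = {
    "toolSpec": {
        "name": "extract_patient_record",
        "description": "Extract structured fields from a healthcare document page.",
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "patient_name": {"type": "string"},
                    "date_of_service": {"type": "string"},
                },
                "required": ["patient_name"],
            }
        },
    }
}

def tool_config() -> dict:
    """toolConfig payload passed to converse()."""
    return {"tools": [EXTRACT_TOOL]}

def collect_tool_uses(message: dict) -> list[dict]:
    """Pull toolUse requests out of a converse() assistant message."""
    return [block["toolUse"] for block in message.get("content", [])
            if "toolUse" in block]

def run_turn(client, model_id: str, document_text: str) -> list[dict]:
    """One turn: send a document page, return the tool calls the model requests."""
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": document_text}]}],
        toolConfig=tool_config(),
    )
    return collect_tool_uses(response["output"]["message"])
```

A full workflow would execute each requested tool, send the results back as `toolResult` content blocks, and repeat until the model produces a final text answer.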

Reducing hallucinations in LLM agents with a verified semantic cache using Amazon Bedrock Knowledge Bases

This post introduces a solution to reduce hallucinations in Large Language Models (LLMs) by implementing a verified semantic cache using Amazon Bedrock Knowledge Bases, which checks if user questions match curated and verified responses before generating new answers. The solution combines the flexibility of LLMs with reliable, verified answers to improve response accuracy, reduce latency, and lower costs while preventing potential misinformation in critical domains such as healthcare, finance, and legal services.

Generate synthetic counterparty risk (CR) data with generative AI using Amazon Bedrock LLMs and RAG

In this post, we explore how you can use LLMs with advanced Retrieval Augmented Generation (RAG) to generate high-quality synthetic data for a finance domain use case. You can apply the same technique to generate synthetic data for other business domains as well. For this post, we demonstrate how to generate counterparty risk (CR) data, which would be beneficial for over-the-counter (OTC) derivatives that are traded directly between two parties, without going through a formal exchange.

Turbocharging premium audit capabilities with the power of generative AI: Verisk’s journey toward a sophisticated conversational chat platform to enhance customer support

Conversational AI assistants are rapidly transforming customer and employee support. Verisk’s Premium Audit Advisory Service (PAAS) is the leading source of technical information and training for premium auditors and underwriters. In this post, we describe the development of the customer support process in PAAS, incorporating generative AI, covering the data, the architecture, and the evaluation of the results.