AWS Machine Learning Blog
Category: Amazon Bedrock
Build a serverless audio summarization solution with Amazon Bedrock and Whisper
In this post, we demonstrate how to use the OpenAI Whisper Large V3 Turbo foundation model (FM), available in Amazon Bedrock Marketplace (which offers access to over 140 models through a dedicated offering), to produce near real-time transcription. These transcriptions are then processed by Amazon Bedrock for summarization and redaction of sensitive information.
Build a Text-to-SQL solution for data consistency in generative AI using Amazon Nova
This post evaluates the key options for querying data using generative AI, discusses their strengths and limitations, and demonstrates why Text-to-SQL is the best choice for deterministic, schema-specific tasks. We show how to effectively use Text-to-SQL using Amazon Nova, a foundation model (FM) available in Amazon Bedrock, to derive precise and reliable answers from your data.
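The Text-to-SQL flow described above can be sketched in a few lines. This is a minimal illustration, not the post's implementation: the `generate_sql` function is a hard-coded stand-in for the call to Amazon Nova via Amazon Bedrock, and the schema, table, and question are invented for the example.

```python
import sqlite3

SCHEMA = "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);"

def generate_sql(question: str, schema: str) -> str:
    """Stand-in for the LLM call that maps a natural-language question to SQL.
    A real implementation would send `question` and `schema` to Amazon Nova
    and return the SQL it generates, constrained to the given schema."""
    # Hard-coded mapping for illustration only.
    if "total revenue" in question.lower():
        return "SELECT SUM(total) FROM orders;"
    raise ValueError("unsupported question in this sketch")

def answer(question: str, conn: sqlite3.Connection) -> float:
    # Because the SQL is deterministic for a fixed schema and question,
    # repeated runs return the same answer -- the consistency argument above.
    sql = generate_sql(question, SCHEMA)
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("alice", 10.0), ("bob", 32.5)],
)
result = answer("What is the total revenue?", conn)
print(result)  # 42.5
```

The key property this shape preserves is determinism: the model produces a query, but the database, not the model, computes the answer.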
Contextual retrieval in Anthropic using Amazon Bedrock Knowledge Bases
Contextual retrieval enhances traditional RAG by adding chunk-specific explanatory context to each chunk before generating embeddings. This approach enriches the vector representation with relevant contextual information, enabling more accurate retrieval of semantically related content when responding to user queries. In this post, we demonstrate how to use contextual retrieval with Anthropic and Amazon Bedrock Knowledge Bases.
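The preprocessing step above can be sketched as follows. This is a minimal sketch, assuming a stubbed `situate` step: in the post, an Anthropic model on Amazon Bedrock generates the chunk-specific context, whereas here a simple template stands in for the model call, and the document and chunks are invented examples.

```python
def situate(document_title: str, chunk: str) -> str:
    """Stand-in for the LLM call that explains how a chunk fits its document.
    A real implementation would prompt the model with the full document and
    the chunk, and return a short situating sentence prepended to the chunk."""
    return f"From '{document_title}': {chunk}"

def contextualize_chunks(document_title: str, chunks: list[str]) -> list[str]:
    # The enriched chunk, not the raw chunk, is what gets embedded and
    # indexed -- so the vector carries document-level context.
    return [situate(document_title, c) for c in chunks]

chunks = ["Revenue grew 12% quarter over quarter.", "Churn fell to 3%."]
enriched = contextualize_chunks("Q3 earnings report", chunks)
print(enriched[0])  # From 'Q3 earnings report': Revenue grew 12% quarter over quarter.
```

A query like "How did the company do in Q3?" can now match the first chunk even though the raw chunk text never mentions Q3.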
Supercharge your development with Claude Code and Amazon Bedrock prompt caching
In this post, we'll explore how to combine Amazon Bedrock prompt caching with Claude Code, a coding agent released by Anthropic that is now generally available. This powerful combination transforms your development workflow by reducing inference latency and lowering input token costs.
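As a hedged sketch of what enabling prompt caching looks like with the Amazon Bedrock Converse API: the request below inserts a `cachePoint` marker after a large, stable system prompt so the prefix before it can be cached across calls. The model ID and prompt text are example placeholders; the request is only constructed here, since actually sending it requires AWS credentials, boto3, and a model that supports caching.

```python
def build_converse_request(model_id: str, system_prompt: str, user_msg: str) -> dict:
    """Build an Amazon Bedrock Converse API request with a prompt-cache marker."""
    return {
        "modelId": model_id,
        "system": [
            {"text": system_prompt},
            # Content before this marker is eligible for caching, so repeated
            # requests sharing the prefix pay less latency and input-token cost.
            {"cachePoint": {"type": "default"}},
        ],
        "messages": [{"role": "user", "content": [{"text": user_msg}]}],
    }

req = build_converse_request(
    "anthropic.claude-3-7-sonnet-20250219-v1:0",  # example model ID
    "You are a careful code reviewer. <large shared project context here>",
    "Review this diff.",
)
# To send: boto3.client("bedrock-runtime").converse(**req)
print(req["system"][1])
```

Claude Code benefits from the same mechanism because its long system prompt and project context are resent on every agent turn.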
Build a scalable AI assistant to help refugees using AWS
The Danish humanitarian organization Bevar Ukraine has developed a comprehensive generative AI-powered virtual assistant called Victor, aimed at addressing the pressing needs of Ukrainian refugees integrating into Danish society. This post details our technical implementation using AWS services to create a scalable, multilingual AI assistant system that provides automated assistance while maintaining data security and GDPR compliance.
Enhanced diagnostics flow with LLM and Amazon Bedrock agent integration
In this post, we explore how Noodoe uses AI and Amazon Bedrock to optimize EV charging operations. By integrating LLMs, Noodoe enhances station diagnostics, enables dynamic pricing, and delivers multilingual support. These innovations reduce downtime, maximize efficiency, and improve sustainability. Read on to discover how AI is transforming EV charging management.
Build GraphRAG applications using Amazon Bedrock Knowledge Bases
In this post, we explore how to use Graph-based Retrieval-Augmented Generation (GraphRAG) in Amazon Bedrock Knowledge Bases to build intelligent applications. Unlike traditional vector search, which retrieves documents based on similarity scores, knowledge graphs encode relationships between entities, allowing large language models (LLMs) to retrieve information with context-aware reasoning.
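The contrast with vector search can be made concrete with a toy graph. This sketch is not the Amazon Bedrock Knowledge Bases implementation (which builds and queries the graph for you); the `KnowledgeGraph` class, entities, and relations are invented to show why traversing edges surfaces context a flat similarity lookup would miss.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Tiny directed graph: entity -> list of (relation, entity) edges."""

    def __init__(self):
        self.edges = defaultdict(list)

    def add(self, subj: str, rel: str, obj: str):
        self.edges[subj].append((rel, obj))

    def neighborhood(self, entity: str, hops: int = 2) -> set[str]:
        """Entities reachable within `hops` edges of `entity`. These become
        extra retrieval context the LLM can reason over, even if their text
        is not lexically or semantically similar to the query."""
        seen, frontier = {entity}, {entity}
        for _ in range(hops):
            frontier = {o for e in frontier for _, o in self.edges[e]} - seen
            seen |= frontier
        return seen - {entity}

kg = KnowledgeGraph()
kg.add("Acme Corp", "acquired", "Widget Inc")
kg.add("Widget Inc", "manufactures", "Model X")
print(kg.neighborhood("Acme Corp"))  # contains 'Widget Inc' and 'Model X'
```

A query about "Acme Corp products" can thus reach "Model X" through two explicit relations, where a pure embedding search over chunks that never co-mention the two names likely would not.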
Fast-track SOP processing using Amazon Bedrock
When a regulatory body like the US Food and Drug Administration (FDA) introduces changes to regulations, organizations are required to evaluate the changes against their internal SOPs. When necessary, they must update their SOPs to align with the regulation changes and maintain compliance. In this post, we show different approaches using Amazon Bedrock to identify relationships between regulation changes and SOPs.
How ZURU improved the accuracy of floor plan generation by 109% using Amazon Bedrock and Amazon SageMaker
ZURU collaborated with the AWS Generative AI Innovation Center and AWS Professional Services to implement a more accurate text-to-floor plan generator using generative AI. In this post, we show why a solution using a large language model (LLM) was chosen, and we explore how model selection, prompt engineering, and fine-tuning can be used to improve results.
Going beyond AI assistants: Examples from Amazon.com reinventing industries with generative AI
Non-conversational applications offer unique advantages such as higher latency tolerance, batch processing, and caching, but their autonomous nature requires stronger guardrails and exhaustive quality assurance compared to conversational applications, which benefit from real-time user feedback and supervision. This post examines four diverse Amazon.com examples of such generative AI applications.