AWS Machine Learning Blog
Category: HAQM Bedrock
Enterprise-grade natural language to SQL generation using LLMs: Balancing accuracy, latency, and scale
In this post, the AWS and Cisco teams unveil a methodical approach that addresses the challenges of enterprise-grade SQL generation. The teams reduced the complexity of the NL2SQL process while delivering higher accuracy and better overall performance.
AWS Field Experience reduced cost and delivered low latency and high performance with HAQM Nova Lite foundation model
The AWS Field Experience (AFX) team’s migration to the HAQM Nova Lite model has delivered tangible enterprise value by enhancing sales workflows. The move has not only achieved significant cost savings and reduced latency, but has also empowered sellers with an intelligent and reliable solution.
Combine keyword and semantic search for text and images using HAQM Bedrock and HAQM OpenSearch Service
In this post, we walk you through how to build a hybrid search solution using OpenSearch Service powered by multimodal embeddings from the HAQM Titan Multimodal Embeddings G1 model through HAQM Bedrock. This solution demonstrates how you can enable users to submit both text and images as queries to retrieve relevant results from a sample retail image dataset.
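As a rough sketch of the indexing side of such a solution, the snippet below generates a single embedding for a text query, an image, or both by invoking the Titan Multimodal Embeddings G1 model through HAQM Bedrock with boto3. The model ID, Region, and inputs are illustrative assumptions rather than the post's exact implementation; the resulting vector would be stored in an OpenSearch Service k-NN index and combined with a keyword (BM25) query for hybrid search.

```python
import base64
import json

import boto3

# Assumes credentials and model access in this Region.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text=None, image_path=None):
    """Return a multimodal embedding for text, an image, or both."""
    body = {}
    if text:
        body["inputText"] = text
    if image_path:
        with open(image_path, "rb") as f:
            body["inputImage"] = base64.b64encode(f.read()).decode("utf-8")

    # Commonly documented model ID for Titan Multimodal Embeddings G1.
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-image-v1",
        body=json.dumps(body),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())["embedding"]

# The same function can embed catalog images at ingestion time and user queries at search time.
query_vector = embed(text="red leather handbag")
```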
Protect sensitive data in RAG applications with HAQM Bedrock
In this post, we explore two approaches for securing sensitive data in RAG applications using HAQM Bedrock. The first approach focuses on identifying and redacting sensitive data before ingestion into an HAQM Bedrock knowledge base, and the second demonstrates a fine-grained role-based access control (RBAC) pattern for managing access to sensitive information during retrieval. These solutions represent just two possible approaches among many for securing sensitive data in generative AI applications.
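To illustrate the second pattern, here is a minimal sketch of retrieval-time filtering, assuming a hypothetical knowledge base ID and a metadata key named access_role attached to each ingested chunk; in a real deployment the filter value would be derived from the authenticated user's role rather than hardcoded.

```python
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="KB12345EXAMPLE",  # hypothetical knowledge base ID
    retrievalQuery={"text": "What is the status of claim 4711?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            # Only return chunks whose metadata marks them as visible to this role.
            "filter": {"equals": {"key": "access_role", "value": "claims_adjuster"}},
        }
    },
)

for result in response["retrievalResults"]:
    print(result["content"]["text"])
```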
Use HAQM Bedrock Intelligent Prompt Routing for cost and latency benefits
Today, we’re happy to announce the general availability of HAQM Bedrock Intelligent Prompt Routing. In this blog post, we detail highlights from our internal testing, explain how you can get started, and point out some caveats and best practices. We encourage you to incorporate HAQM Bedrock Intelligent Prompt Routing into your new and existing generative AI applications.
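To show roughly what getting started looks like, the sketch below sends a Converse API request to a prompt router instead of a single model; the router ARN is a placeholder to be copied from the HAQM Bedrock console, and the details of the response trace may differ from what is suggested here.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder ARN; copy the actual prompt router ARN from the HAQM Bedrock console.
prompt_router_arn = (
    "arn:aws:bedrock:us-east-1:111122223333:default-prompt-router/anthropic.claude:1"
)

response = bedrock_runtime.converse(
    modelId=prompt_router_arn,  # a router ARN can be used wherever a model ID is expected
    messages=[
        {"role": "user", "content": [{"text": "Summarize our Q3 support ticket trends."}]}
    ],
)

print(response["output"]["message"]["content"][0]["text"])
# The response metadata indicates which underlying model the router selected for this request.
```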
How Infosys improved accessibility for Event Knowledge using HAQM Nova Pro, HAQM Bedrock and HAQM Elemental Media Services
In this post, we explore how Infosys developed Infosys Event AI to unlock the insights generated from events and conferences. Through its suite of features, including real-time transcription, intelligent summaries, and an interactive chat assistant, Infosys Event AI makes event knowledge accessible and provides attendees with an immersive engagement solution during and after the event.
HAQM Bedrock Prompt Optimization Drives LLM Applications Innovation for Yuewen Group
Today, we are excited to announce the availability of Prompt Optimization on HAQM Bedrock. With this capability, you can now optimize your prompts for several use cases with a single API call or a click of a button on the HAQM Bedrock console. In this blog post, we discuss how Prompt Optimization improves the performance of large language models (LLMs) for intelligent text processing tasks at Yuewen Group.
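As a sketch of the single-API-call path, the following snippet assumes the boto3 bedrock-agent-runtime client's optimize_prompt operation and a Nova Lite target model; the prompt text and the streaming-event handling are illustrative rather than the exact code from the post.

```python
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

prompt = "Extract the main characters and their relationships from the chapter below:\n{chapter}"

# Target the model the optimized prompt will eventually run on.
response = client.optimize_prompt(
    input={"textPrompt": {"text": prompt}},
    targetModelId="amazon.nova-lite-v1:0",
)

# Results come back as an event stream; collect the optimized-prompt events as they arrive.
for event in response["optimizedPrompt"]:
    if "optimizedPromptEvent" in event:
        print(event["optimizedPromptEvent"])
```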
Build a location-aware agent using HAQM Bedrock Agents and Foursquare APIs
In this post, we combine HAQM Bedrock Agents and Foursquare APIs to demonstrate how you can use a location-aware agent to bring personalized responses to your users.
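One way such an agent can reach Foursquare is through an action group backed by a Lambda function. The sketch below is a hypothetical handler, assuming the function-details action group event and response shapes, a query and ll (latitude,longitude) parameter pair defined in the action group, and a Foursquare Places API key stored in the FSQ_API_KEY environment variable.

```python
import json
import os
import urllib.parse
import urllib.request

def lambda_handler(event, context):
    # Parameters the agent extracted from the user's request (names are illustrative).
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    query = urllib.parse.quote(params.get("query", "coffee"))
    lat_lng = params.get("ll", "47.6062,-122.3321")  # example coordinates

    # Foursquare Places search; the API key comes from the Lambda environment.
    url = f"https://api.foursquare.com/v3/places/search?query={query}&ll={lat_lng}&limit=5"
    request = urllib.request.Request(url, headers={"Authorization": os.environ["FSQ_API_KEY"]})
    with urllib.request.urlopen(request) as resp:
        places = json.load(resp).get("results", [])

    names = [place.get("name", "unknown") for place in places]

    # Return the result to the agent in the expected function-response envelope.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "function": event["function"],
            "functionResponse": {
                "responseBody": {"TEXT": {"body": json.dumps(names)}}
            },
        },
    }
```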
Build an automated generative AI solution evaluation pipeline with HAQM Nova
In this post, we explore the importance of evaluating LLMs in the context of generative AI applications, highlighting the challenges posed by issues like hallucinations and biases. We introduce a comprehensive solution that uses AWS services to automate the evaluation process, allowing for continuous monitoring and assessment of LLM performance. By using tools like the FMeval Library, Ragas, LLMeter, and Step Functions, the solution provides flexibility and scalability, meeting the evolving needs of LLM consumers.
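As a small taste of the evaluation step, the sketch below scores a single RAG record with Ragas; the record contents are made up, and Ragas additionally needs a judge LLM and embeddings configured (for example a Bedrock model through its LangChain wrappers), with the exact API varying between Ragas versions.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# Toy evaluation record; in the pipeline these come from the application under test.
records = {
    "question": ["What does the returns policy cover?"],
    "answer": ["Unused items can be returned within 30 days for a full refund."],
    "contexts": [["Customers may return unused items within 30 days of purchase for a full refund."]],
}

dataset = Dataset.from_dict(records)

# Score faithfulness to the retrieved context and relevance to the question.
results = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
print(results)
```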
Build a FinOps agent using HAQM Bedrock with multi-agent capability and HAQM Nova as the foundation model
In this post, we use the multi-agent feature of HAQM Bedrock to demonstrate a powerful and innovative approach to AWS cost management. By using the advanced capabilities of HAQM Nova FMs, we’ve developed a solution that showcases how AI-driven agents can revolutionize the way organizations analyze, optimize, and manage their AWS costs.
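To give a concrete flavor of the kind of tool such agents can call, here is a minimal sketch of a cost-analysis helper built on the AWS Cost Explorer API; the function name and the way an agent would invoke it are hypothetical, but the get_cost_and_usage call follows the standard boto3 interface.

```python
from datetime import date, timedelta

import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

def monthly_cost_by_service(months_back=3):
    """Hypothetical agent tool: unblended cost per service for roughly the last few whole months."""
    end = date.today().replace(day=1)
    start = (end - timedelta(days=months_back * 31)).replace(day=1)

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    # Reshape the response into {month: {service: cost}} for easier reasoning by the agent.
    return {
        period["TimePeriod"]["Start"]: {
            group["Keys"][0]: group["Metrics"]["UnblendedCost"]["Amount"]
            for group in period["Groups"]
        }
        for period in response["ResultsByTime"]
    }

print(monthly_cost_by_service())
```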