AWS Machine Learning Blog
Tag: Generative AI
Revolutionizing clinical trials with the power of voice and AI
As the healthcare industry continues its digital transformation, solutions that combine advanced technologies like speech-to-text transcription and LLMs will become increasingly valuable in addressing key challenges such as patient education, engagement, and empowerment. In this post, we discuss possible use cases for combining speech recognition technology with LLMs, and how the resulting solution can revolutionize clinical trials.
Intelligent healthcare assistants: Empowering stakeholders with personalized support and data-driven insights
Healthcare decisions often require integrating information from multiple sources, such as medical literature, clinical databases, and patient records. LLMs cannot seamlessly access and synthesize data from these diverse, distributed sources, which limits their potential to provide comprehensive, well-informed insights for healthcare applications. In this post, we explore how Mistral models on Amazon Bedrock can address these challenges and enable the development of intelligent healthcare agents with LLM function calling capabilities, while maintaining robust data security and privacy through Amazon Bedrock Guardrails.
Benchmarking customized models on Amazon Bedrock using LLMPerf and LiteLLM
This post begins a blog series exploring DeepSeek and open FMs on Amazon Bedrock Custom Model Import. It covers performance benchmarking of custom models in Amazon Bedrock using popular open source tools: LLMPerf and LiteLLM. It includes a notebook with step-by-step instructions for deploying a DeepSeek-R1-Distill-Llama-8B model, but the same steps apply to any other model supported by Amazon Bedrock Custom Model Import.
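As a rough illustration of what such a benchmark measures, the sketch below times repeated model invocations and summarizes latency percentiles. The `invoke_model` stub is purely hypothetical; in the post's setup it would be replaced by a real LiteLLM call against the imported Bedrock model.

```python
import statistics
import time


def invoke_model(prompt):
    """Hypothetical stand-in for a model call; a real benchmark would
    call the deployed custom model (e.g., via LiteLLM) here."""
    time.sleep(0.01)  # simulate inference latency
    return "response"


def benchmark(prompts, invoke=invoke_model):
    """Collect per-request latency for each prompt and report p50/p95/mean."""
    latencies = []
    for p in prompts:
        start = time.perf_counter()
        invoke(p)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "p50": statistics.median(latencies),
        "p95": latencies[max(0, int(len(latencies) * 0.95) - 1)],
        "mean": statistics.fmean(latencies),
    }


if __name__ == "__main__":
    stats = benchmark(["Summarize this document."] * 20)
    print(f"p50={stats['p50']:.3f}s p95={stats['p95']:.3f}s")
```

Tools like LLMPerf additionally track token throughput and concurrency, but the core loop is the same: issue requests, record latencies, aggregate percentiles.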
Evaluate RAG responses with Amazon Bedrock, LlamaIndex, and RAGAS
In this post, we explore how to use Amazon Bedrock, LlamaIndex, and RAGAS to enhance your RAG implementations. You'll learn practical techniques to evaluate and optimize your AI systems, enabling more accurate, context-aware responses that align with your organization's specific needs.
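To make the evaluation loop concrete, here is a minimal sketch of scoring generated answers against their retrieved contexts. The `context_overlap` metric is a toy token-overlap stand-in invented for illustration only; RAGAS metrics such as faithfulness use an LLM judge rather than token overlap.

```python
def context_overlap(answer: str, contexts: list[str]) -> float:
    """Toy 'groundedness' score: the fraction of answer tokens that
    appear in at least one retrieved context chunk."""
    answer_tokens = set(answer.lower().split())
    if not answer_tokens:
        return 0.0
    context_tokens = set()
    for chunk in contexts:
        context_tokens |= set(chunk.lower().split())
    return len(answer_tokens & context_tokens) / len(answer_tokens)


# Each evaluation sample pairs a question with the RAG pipeline's
# answer and the context chunks it retrieved.
samples = [
    {
        "question": "What does the service offer?",
        "answer": "bedrock offers foundation models",
        "contexts": ["Amazon Bedrock offers a choice of foundation models"],
    },
]
for sample in samples:
    sample["score"] = context_overlap(sample["answer"], sample["contexts"])
```

A real RAGAS run would build a similar dataset of question/answer/context triples and pass it to its `evaluate` function with the metrics you care about.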
Accelerate AWS Well-Architected reviews with generative AI
In this post, we explore a generative AI solution that uses Amazon Bedrock to streamline the Well-Architected Framework Review (WAFR) process. We demonstrate how to harness the power of LLMs to build an intelligent, scalable system that analyzes architecture documents and generates insightful recommendations based on AWS Well-Architected best practices. This solution automates portions of the WAFR report creation, helping solutions architects improve the efficiency and thoroughness of architectural assessments while supporting their decision-making process.
How Pattern PXM’s Content Brief is driving conversion on ecommerce marketplaces using AI
Pattern is a leader in ecommerce acceleration, helping brands navigate the complexities of selling on marketplaces and achieve profitable growth through a combination of proprietary technology and on-demand expertise. In this post, we share how Pattern uses AWS services to process trillions of data points to deliver actionable insights, optimizing product listings across multiple services.
How to configure cross-account model deployment using Amazon Bedrock Custom Model Import
In this guide, we provide step-by-step instructions for configuring cross-account access for Amazon Bedrock Custom Model Import, covering both unencrypted and AWS Key Management Service (AWS KMS) encrypted scenarios.
Accelerate IaC troubleshooting with Amazon Bedrock Agents
This post demonstrates how Amazon Bedrock Agents, combined with action groups and generative AI models, streamline and accelerate the resolution of Terraform errors while maintaining compliance with environment security and operational guidelines.
Derive generative AI-powered insights from Alation Cloud Services using Amazon Q Business Custom Connector
In this post, we show how Alation's business policies can be integrated with an Amazon Q Business application using a custom data source connector.
Reducing hallucinations in LLM agents with a verified semantic cache using Amazon Bedrock Knowledge Bases
This post introduces a solution that reduces hallucinations in large language models (LLMs) by implementing a verified semantic cache using Amazon Bedrock Knowledge Bases, which checks whether a user's question matches a curated, verified response before generating a new answer. The solution combines the flexibility of LLMs with reliable, verified answers to improve response accuracy, reduce latency, and lower costs, while preventing potential misinformation in critical domains such as healthcare, finance, and legal services.
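The check-before-generate flow can be sketched as follows. This toy version uses token-overlap similarity and a hypothetical `generate_with_llm` placeholder; the actual solution matches incoming questions against curated question/answer pairs using vector embeddings stored in a Bedrock knowledge base.

```python
def jaccard(a: str, b: str) -> float:
    """Toy similarity measure; a real semantic cache would compare
    vector embeddings of the questions instead of token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


# Curated, human-verified question/answer pairs (illustrative content).
VERIFIED_QA = {
    "what is the maximum dosage of drug x": "Verified answer A",
    "how do i enroll in the trial": "Verified answer B",
}
THRESHOLD = 0.6  # minimum similarity to trust a cached answer


def generate_with_llm(question: str) -> str:
    # Hypothetical placeholder for a model invocation.
    return "freshly generated answer"


def answer(question: str) -> tuple[str, str]:
    """Return (source, answer): serve the verified response when the
    question is close enough to a curated one, else fall back to the LLM."""
    best_q = max(VERIFIED_QA, key=lambda q: jaccard(question, q))
    if jaccard(question, best_q) >= THRESHOLD:
        return "cache", VERIFIED_QA[best_q]
    return "llm", generate_with_llm(question)
```

Cache hits skip model invocation entirely, which is where the latency and cost savings come from; only novel questions reach the LLM.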