AWS Machine Learning Blog
Category: Intermediate (200)
Deploy generative AI agents in your contact center for voice and chat using HAQM Connect, HAQM Lex, and HAQM Bedrock Knowledge Bases
In this post, we show you how DoorDash built a generative AI agent using HAQM Connect, HAQM Lex, and HAQM Bedrock Knowledge Bases to provide a low-latency, self-service experience for their delivery workers.
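As a minimal illustration of the chat side of such an experience (not code from the post; the bot ID, alias, and session values below are placeholders), a client can relay a customer utterance to an HAQM Lex V2 bot with the AWS SDK for Python:

```python
import boto3

# Hedged sketch: the bot ID, alias, and locale are placeholders, not values from the
# post. It shows a chat channel relaying one utterance to an HAQM Lex V2 bot, which in
# the post's architecture hands off to the knowledge-base-backed generative AI agent.
lex = boto3.client("lexv2-runtime")

response = lex.recognize_text(
    botId="EXAMPLEBOTID",        # placeholder bot ID
    botAliasId="TSTALIASID",     # placeholder alias ID
    localeId="en_US",
    sessionId="delivery-worker-123",
    text="Where do I pick up my next order?",
)

# Print the bot's reply messages for this turn, if any were returned
for message in response.get("messages", []):
    print(message["content"])
```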
Enhancing Just Walk Out technology with multi-modal AI
In this post, we showcase the latest generation of Just Walk Out technology by HAQM, powered by a multi-modal foundation model (FM). We designed this multi-modal FM for physical stores using a transformer-based architecture similar to that underlying many generative artificial intelligence (AI) applications.
Integrate dynamic web content in your generative AI application using a web search API and HAQM Bedrock Agents
In this post, we demonstrate how to use HAQM Bedrock Agents with a web search API to integrate dynamic web content in your generative AI application.
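For orientation, here is a hedged sketch of the client-side call to an already-configured HAQM Bedrock agent; the agent and alias IDs are placeholders, and the web search action group itself is wired up separately as the post describes:

```python
import boto3

# Hedged sketch: agent and alias IDs are placeholders. This only shows sending a
# question to a configured agent and streaming back the answer; the web search API
# is attached to the agent as an action group outside of this snippet.
runtime = boto3.client("bedrock-agent-runtime")

stream = runtime.invoke_agent(
    agentId="AGENT_ID_PLACEHOLDER",
    agentAliasId="ALIAS_ID_PLACEHOLDER",
    sessionId="demo-session-1",
    inputText="What were the top AI announcements this week?",
)

# The response is an event stream; collect the text chunks as they arrive.
answer = ""
for event in stream["completion"]:
    chunk = event.get("chunk")
    if chunk:
        answer += chunk["bytes"].decode("utf-8")
print(answer)
```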
CRISPR-Cas9 guide RNA efficiency prediction with efficiently tuned models in HAQM SageMaker
Clustered regularly interspaced short palindromic repeat (CRISPR) technology promises to revolutionize gene editing, transforming the way we understand and treat diseases. The technique is based on a natural mechanism found in bacteria that allows a protein coupled to a single guide RNA (gRNA) strand to locate and make […]
Build a RAG-based QnA application using Llama3 models from SageMaker JumpStart
In this post, we provide a step-by-step guide for creating an enterprise-ready Retrieval Augmented Generation (RAG) application such as a question answering bot. We use the Llama3-8B FM for text generation and the BGE Large EN v1.5 text embedding model to generate embeddings, both available through HAQM SageMaker JumpStart.
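As a rough sketch of the generation half of that RAG flow (the endpoint name, payload shape, and retrieved passages below are assumptions for illustration, not values from the post):

```python
import json
import boto3

# Hedged sketch: the endpoint name and request payload are assumptions. Retrieved
# passages are stitched into the prompt sent to a Llama3-8B endpoint deployed from
# SageMaker JumpStart; the retrieval step itself is omitted here.
smr = boto3.client("sagemaker-runtime")

retrieved_passages = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Unused vacation days can be carried over for up to 12 months.",
]

prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n".join(retrieved_passages) + "\n\n"
    "Question: How many vacation days do I earn each month?\nAnswer:"
)

response = smr.invoke_endpoint(
    EndpointName="jumpstart-llama3-8b-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps({"inputs": prompt,
                     "parameters": {"max_new_tokens": 256, "temperature": 0.1}}),
)
print(json.loads(response["Body"].read()))
```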
Best prompting practices for using Meta Llama 3 with HAQM SageMaker JumpStart
In this post, we dive into the best practices and techniques for prompting Meta Llama 3 using HAQM SageMaker JumpStart to generate high-quality, relevant outputs. We discuss how to use system prompts and few-shot examples, and how to optimize inference parameters, so you can get the most out of Meta Llama 3.
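For reference, a minimal sketch of how Meta Llama 3's chat template can be assembled by hand, so the placement of the system prompt and few-shot examples is explicit (the example content is purely illustrative):

```python
# Hedged sketch of the Meta Llama 3 chat template, built manually so the roles of the
# system prompt and few-shot turns are visible. The example messages are illustrative.
def build_llama3_prompt(system, examples, user_question):
    prompt = "<|begin_of_text|>"
    prompt += f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    # Few-shot examples are supplied as prior user/assistant turns.
    for question, answer in examples:
        prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{question}<|eot_id|>"
        prompt += f"<|start_header_id|>assistant<|end_header_id|>\n\n{answer}<|eot_id|>"
    prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{user_question}<|eot_id|>"
    # Leave the assistant header open so the model generates the next answer.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = build_llama3_prompt(
    system="You are a concise financial analyst.",
    examples=[("Summarize: revenue rose 10% YoY.", "Revenue grew 10% year over year.")],
    user_question="Summarize: operating margin fell from 12% to 9%.",
)
print(prompt)
```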
Introducing HAQM EKS support in HAQM SageMaker HyperPod
This post, written for Kubernetes cluster administrators and ML scientists, provides an overview of the key features that SageMaker HyperPod introduces to facilitate large-scale model training on an HAQM EKS cluster.
A review of purpose-built accelerators for financial services
In this post, we aim to provide business leaders with a non-technical overview of purpose-built accelerators (PBAs) and their role within the financial services industry (FSI).
How Vidmob is using generative AI to transform its creative data landscape
In this post, we illustrate how Vidmob, a creative data company, worked with the AWS Generative AI Innovation Center (GenAIIC) team to uncover meaningful insights at scale within creative data using HAQM Bedrock.
Evaluating prompts at scale with Prompt Management and Prompt Flows for HAQM Bedrock
In this post, we demonstrate how to implement an automated prompt evaluation system using HAQM Bedrock so you can streamline your prompt development process and improve the overall quality of your AI-generated content.
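As a hedged sketch of the core evaluation step such a system automates (this is not the Prompt Management or Prompt Flows configuration itself; the evaluator model ID and rubric are assumptions), an LLM-as-a-judge call with the HAQM Bedrock Converse API might look like this:

```python
import boto3

# Hedged sketch: the evaluator model ID and scoring rubric are assumptions. It shows
# a single LLM-as-a-judge call scoring one prompt/output pair with the Converse API;
# the post orchestrates this kind of step at scale with Prompt Flows.
bedrock = boto3.client("bedrock-runtime")

candidate_prompt = "Summarize the customer email in two sentences."
model_output = "The customer reports a late delivery and asks for a refund."

judge_request = (
    "Rate the following output for the given prompt on a 1-5 scale for relevance "
    "and completeness. Reply with a single integer.\n\n"
    f"Prompt: {candidate_prompt}\nOutput: {model_output}"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed evaluator model
    messages=[{"role": "user", "content": [{"text": judge_request}]}],
    inferenceConfig={"maxTokens": 10, "temperature": 0},
)
score = response["output"]["message"]["content"][0]["text"]
print(f"Judge score: {score}")
```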