AWS Machine Learning Blog
Tag: Generative AI
Effectively manage foundation models for generative AI applications with HAQM SageMaker Model Registry
In this post, we explore the new features of Model Registry that streamline foundation model (FM) management: you can now register unzipped model artifacts and pass an End User License Agreement (EULA) acceptance flag without requiring user intervention.
Build an ecommerce product recommendation chatbot with HAQM Bedrock Agents
In this post, we show you how to build an ecommerce product recommendation chatbot using HAQM Bedrock Agents and foundation models (FMs) available in HAQM Bedrock.
Implementing tenant isolation using Agents for HAQM Bedrock in a multi-tenant environment
In this blog post, we show you how to implement tenant isolation using HAQM Bedrock agents within a multi-tenant environment. We demonstrate this using a sample multi-tenant ecommerce application that provides a service for various tenants to create online stores. This application uses HAQM Bedrock agents to power an AI assistant, or chatbot, capable of providing tenant-specific information, such as return policies, and user-specific information, such as order counts and status updates.
Building automations to accelerate remediation of AWS Security Hub control findings using HAQM Bedrock and AWS Systems Manager
In this post, we harness the power of generative artificial intelligence (AI) and HAQM Bedrock to help organizations simplify and effectively manage remediation of AWS Security Hub control findings.
Analyze customer reviews using HAQM Bedrock
This post explores an innovative application of large language models (LLMs) to automate customer review analysis. LLMs are a type of foundation model (FM) pre-trained on vast amounts of text data. This post discusses how LLMs can be accessed through HAQM Bedrock to build a generative AI solution that automatically summarizes key information, recognizes customer sentiment, and generates actionable insights from customer reviews. This method shows significant promise in saving human analysts time while producing high-quality results. We examine the approach in detail, provide examples, highlight key benefits and limitations, and discuss future opportunities for more advanced product review summarization through generative AI.
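The review-analysis flow described above can be sketched with the Bedrock Runtime Converse API. This is a minimal illustration, not the post's actual implementation: the prompt wording and the model ID are assumptions, and calling the model requires AWS credentials with Bedrock model access.

```python
def build_review_prompt(reviews):
    """Assemble a single summarization prompt from raw review texts.

    The prompt wording here is illustrative, not taken from the post.
    """
    joined = "\n".join(f"- {r}" for r in reviews)
    return (
        "Summarize the key themes, overall customer sentiment, and "
        "actionable insights from these customer reviews:\n" + joined
    )

def analyze_reviews(reviews, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Send the prompt to a foundation model via the Bedrock Converse API.

    Requires configured AWS credentials and access to the chosen model
    (the model ID above is an assumption).
    """
    import boto3  # AWS SDK for Python; imported here so the sketch loads without it

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[
            {"role": "user", "content": [{"text": build_review_prompt(reviews)}]}
        ],
    )
    # The model's reply text lives under output.message.content in the response
    return response["output"]["message"]["content"][0]["text"]
```

In practice, a production pipeline would batch reviews, handle throttling, and post-process the model output into structured fields (sentiment label, themes, insights) rather than returning free text.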
Accuracy evaluation framework for HAQM Q Business
Generative artificial intelligence (AI), particularly Retrieval Augmented Generation (RAG) solutions, is rapidly demonstrating its vast potential to revolutionize enterprise operations. RAG models combine the strengths of information retrieval systems with advanced natural language generation, enabling more contextually accurate and informative outputs. From automating customer interactions to optimizing backend operation processes, these technologies are not just […]
How Twilio generated SQL using Looker Modeling Language data with HAQM Bedrock
As one of the largest AWS customers, Twilio engages with data, artificial intelligence (AI), and machine learning (ML) services to run its daily workloads. This post highlights how Twilio enabled natural language-driven exploration of business intelligence (BI) data with Retrieval Augmented Generation (RAG) and HAQM Bedrock.
Inference AudioCraft MusicGen models using HAQM SageMaker
Music generation models have emerged as powerful tools that transform natural language text into musical compositions. Originating from advancements in artificial intelligence (AI) and deep learning, these models are designed to understand and translate descriptive text into coherent, aesthetically pleasing music. Their ability to democratize music production allows individuals without formal training to create high-quality […]
Faster LLMs with speculative decoding and AWS Inferentia2
In recent years, the large language models (LLMs) used to solve natural language processing (NLP) tasks such as question answering and text summarization have grown considerably in size. Larger models with more parameters, on the order of hundreds of billions at the time of writing, tend to produce better […]
Import a fine-tuned Meta Llama 3 model for SQL query generation on HAQM Bedrock
In this post, we demonstrate the process of fine-tuning Meta Llama 3 8B on SageMaker to specialize it in the generation of SQL queries (text-to-SQL). Meta Llama 3 8B is a relatively small model that offers a balance between performance and resource efficiency. AWS customers have explored fine-tuning Meta Llama 3 8B for the generation of SQL queries—especially when using non-standard SQL dialects—and have requested methods to import their customized models into HAQM Bedrock to benefit from the managed infrastructure and security that HAQM Bedrock provides when serving those models.