AWS Machine Learning Blog
Category: AWS Lambda
Generate and evaluate images in Amazon Bedrock with Amazon Nova Canvas and Anthropic Claude 3.5 Sonnet
In this post, we demonstrate how to interact with the Amazon Titan Image Generator G1 v2 model on Amazon Bedrock to generate an image. Then, we show you how to use Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock to describe it, evaluate it with a score from 1–10, explain the reason behind the given score, and suggest improvements to the image.
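The generate-then-evaluate flow described above can be sketched with the Bedrock runtime API. This is a minimal sketch, not the post's actual code: the model IDs, prompt text, and helper names below are illustrative assumptions, so check the Amazon Bedrock console for the model IDs enabled in your account and Region.

```python
import json


# Hypothetical model IDs -- verify against the models enabled in your account.
IMAGE_MODEL_ID = "amazon.titan-image-generator-v2:0"
CLAUDE_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"


def build_image_request(prompt: str) -> dict:
    """Request body for a Titan text-to-image invocation."""
    return {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {"numberOfImages": 1, "height": 1024, "width": 1024},
    }


def build_evaluation_request(image_b64: str) -> dict:
    """Request body asking Claude to describe, score (1-10), and critique the image."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
                {"type": "text",
                 "text": "Describe this image, score it from 1-10, explain the score, "
                         "and suggest improvements."},
            ],
        }],
    }


def generate_and_evaluate(prompt: str, region: str = "us-east-1") -> str:
    """Generate an image, then have Claude evaluate it; returns Claude's text."""
    import boto3  # assumes AWS credentials and Bedrock model access are configured

    client = boto3.client("bedrock-runtime", region_name=region)
    image_resp = client.invoke_model(
        modelId=IMAGE_MODEL_ID, body=json.dumps(build_image_request(prompt)))
    image_b64 = json.loads(image_resp["body"].read())["images"][0]
    eval_resp = client.invoke_model(
        modelId=CLAUDE_MODEL_ID, body=json.dumps(build_evaluation_request(image_b64)))
    return json.loads(eval_resp["body"].read())["content"][0]["text"]
```

Both request bodies follow the documented Bedrock `InvokeModel` format for their respective models; only the plumbing around them (region, prompt, output handling) is a sketch.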
Create a generative AI–powered custom Google Chat application using Amazon Bedrock
AWS offers powerful generative AI services, including Amazon Bedrock, which allows organizations to create tailored use cases such as AI chat-based assistants that answer questions based on knowledge contained in the customers’ documents, and much more. Many businesses want to integrate these cutting-edge AI capabilities with their existing collaboration tools, such as Google Chat, to […]
Automate Amazon Bedrock batch inference: Building a scalable and efficient pipeline
Although batch inference offers numerous benefits, it’s limited to 10 batch inference jobs submitted per model per Region. To address this limitation and enhance your use of batch inference, we’ve developed a scalable solution using AWS Lambda and Amazon DynamoDB. This post guides you through implementing a queue management system that automatically monitors available job slots and submits new jobs as slots become available.
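The slot-tracking idea behind that queue management system can be sketched in two pieces: a pure helper that computes free slots against the 10-job quota, and a DynamoDB conditional update that claims a slot atomically. The table schema, key, and attribute names here are hypothetical assumptions, not the post's actual design, which also wires this logic into Lambda and a job queue.

```python
MAX_ACTIVE_JOBS = 10  # Bedrock quota: batch inference jobs per model per Region


def slots_available(active_jobs: int, max_jobs: int = MAX_ACTIVE_JOBS) -> int:
    """How many new batch jobs can be submitted right now."""
    return max(max_jobs - active_jobs, 0)


def claim_slot(table, model_id: str) -> bool:
    """Atomically claim a job slot; returns False when the quota is full.

    `table` is a boto3 DynamoDB Table resource. The counter item key
    ("pk") and attribute name ("active_jobs") are illustrative.
    """
    try:
        table.update_item(
            Key={"pk": f"slots#{model_id}"},
            UpdateExpression="ADD active_jobs :one",
            # Condition fails (no increment) once the counter reaches the quota.
            ConditionExpression="attribute_not_exists(active_jobs) OR active_jobs < :max",
            ExpressionAttributeValues={":one": 1, ":max": MAX_ACTIVE_JOBS},
        )
        return True
    except table.meta.client.exceptions.ConditionalCheckFailedException:
        return False
```

Using a conditional update rather than a read-then-write avoids the race where two Lambda invocations both see a free slot and oversubscribe the quota.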
Deploy a serverless web application to edit images using Amazon Bedrock
In this post, we explore a sample solution that you can use to deploy an image editing application by using AWS serverless services and generative AI services. We use Amazon Bedrock and an Amazon Titan FM that allows you to edit images by using prompts.
Create a multimodal chatbot tailored to your unique dataset with Amazon Bedrock FMs
In this post, we show how to create a multimodal chat assistant on Amazon Web Services (AWS) using Amazon Bedrock models, where users can submit images and questions, and text responses will be sourced from a closed set of proprietary documents.
Improve employee productivity using generative AI with Amazon Bedrock
In this post, we show you the Employee Productivity GenAI Assistant Example, a solution built on AWS technologies like Amazon Bedrock, to automate writing tasks and enhance employee productivity.
Accelerate performance using a custom chunking mechanism with Amazon Bedrock
This post explores how Accenture used the customization capabilities of Knowledge Bases for Amazon Bedrock to incorporate their data processing workflow and custom logic into a custom chunking mechanism that enhances the performance of Retrieval Augmented Generation (RAG) and unlocks the potential of PDF data.
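As an illustration of what a custom chunking step might look like, here is a generic character-based sketch with overlap. This is not Accenture's actual logic, which runs as a custom transformation inside Knowledge Bases and splits on document structure rather than character counts; the function and parameter names are assumptions.

```python
def chunk_text(text: str, max_chars: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks with overlap between neighbors.

    Overlap preserves context across chunk boundaries so a retrieved chunk
    is less likely to start or end mid-thought.
    """
    if overlap >= max_chars:
        raise ValueError("overlap must be smaller than max_chars")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap  # advance by the non-overlapping portion
    return chunks
```

A domain-specific chunker would replace the fixed-size split with logic aware of sections, tables, and headings in the source PDFs, which is the kind of customization the post describes.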
Create an end-to-end serverless digital assistant for semantic search with Amazon Bedrock
With the rise of generative artificial intelligence (AI), an increasing number of organizations use digital assistants to let their end users ask domain-specific questions, using Retrieval Augmented Generation (RAG) over their enterprise data sources. As organizations transition from proofs of concept to production workloads, they establish objectives to run and scale their workloads with minimal operational […]
Deploy a Slack gateway for Amazon Bedrock
In today’s fast-paced digital world, streamlining workflows and boosting productivity are paramount. That’s why we’re thrilled to share an exciting integration that will take your team’s collaboration to new heights. Get ready to unlock the power of generative artificial intelligence (AI) and bring it directly into your Slack workspace. Imagine the possibilities: Quick and efficient […]
Scalable intelligent document processing using Amazon Bedrock
In today’s data-driven business landscape, the ability to efficiently extract and process information from a wide range of documents is crucial for informed decision-making and maintaining a competitive edge. However, traditional document processing workflows often involve complex and time-consuming manual tasks, hindering productivity and scalability. In this post, we discuss an approach that uses the […]