AWS Database Blog
Category: HAQM Bedrock
Using generative AI and HAQM Bedrock to generate SPARQL queries to discover protein functional information with UniProtKB and HAQM Neptune
In this post, we demonstrate how to use generative AI and HAQM Bedrock to transform natural language questions into graph queries to run against a knowledge graph. We explore the generation of queries written in the SPARQL query language, a well-known language for querying graphs whose data is represented using the Resource Description Framework (RDF).
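At its core, the approach prompts a foundation model with the question and enough schema context to produce a runnable query. A minimal sketch of that step, assuming the HAQM Bedrock Converse API and an Anthropic Claude model (the model ID and prompt wording here are illustrative, not the post's exact setup):

```python
import boto3

# Ask a Bedrock-hosted model to translate a natural language question into SPARQL
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

question = "Which human proteins are associated with kinase activity?"
prompt = (
    "You are a SPARQL expert. Using the UniProtKB RDF schema, "
    "write a SPARQL query that answers the following question. "
    "Return only the query.\n\n"
    f"Question: {question}"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
sparql_query = response["output"]["message"]["content"][0]["text"]
print(sparql_query)  # Review, then run against the Neptune SPARQL endpoint
```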
Integrate natural language processing and generative AI with relational databases
In this post, we present an approach to using natural language processing (NLP) to query an HAQM Aurora PostgreSQL-Compatible Edition database. The solution assumes that an organization already has an Aurora PostgreSQL database. We build a web application using the Flask framework so that users can interact with the database, with JavaScript and Python code acting as the interface between the web framework, HAQM Bedrock, and the database.
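The end-to-end flow is: a Flask route receives the question, a foundation model translates it into SQL, and the application runs that SQL against Aurora. A simplified sketch with placeholder connection details and a hypothetical /ask route (in practice you would validate the generated SQL and connect with a read-only database user):

```python
import boto3
import psycopg2
from flask import Flask, request, jsonify

app = Flask(__name__)
bedrock = boto3.client("bedrock-runtime")

@app.route("/ask", methods=["POST"])
def ask():
    question = request.json["question"]
    prompt = (
        "Translate this question into a single PostgreSQL SELECT statement. "
        f"Return only the SQL.\n\nQuestion: {question}"
    )
    result = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    sql = result["output"]["message"]["content"][0]["text"]
    # Run the generated SQL with least-privilege, read-only credentials
    with psycopg2.connect(host="aurora-endpoint", dbname="mydb",
                          user="readonly_user", password="...") as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
            rows = cur.fetchall()
    return jsonify({"sql": sql, "rows": rows})
```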
Multi-tenant vector search with HAQM Aurora PostgreSQL and HAQM Bedrock Knowledge Bases
In this post, we discuss the fully managed approach, using HAQM Bedrock Knowledge Bases to simplify the integration of the data source and the Aurora vector store with your generative AI application. HAQM Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and HAQM available through an API, so you can choose from a wide range of FMs to find the model best suited for your use case.
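With the managed approach, retrieval and generation collapse into a single API call against the knowledge base. A sketch using boto3 (the knowledge base ID and model ARN are placeholders):

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

# Retrieve relevant chunks from the Aurora-backed knowledge base and
# generate a grounded answer in one call
response = agent_runtime.retrieve_and_generate(
    input={"text": "What is our refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBID12345",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])
```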
Self-managed multi-tenant vector search with HAQM Aurora PostgreSQL
In this post, we explore the process of building a multi-tenant generative AI application using Aurora PostgreSQL-Compatible for vector storage. In Part 1 (this post), we present a self-managed approach to building the vector search with Aurora. In Part 2, we present a fully managed approach using HAQM Bedrock Knowledge Bases to simplify the integration of the data sources, the Aurora vector store, and your generative AI application.
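In the self-managed approach, tenant isolation comes down to how you query the vector store. A sketch of a tenant-scoped similarity search with pgvector (the table, column names, and credentials are assumptions for illustration):

```python
import json
import boto3
import psycopg2

bedrock = boto3.client("bedrock-runtime")

# Embed the user's query with an HAQM Titan embeddings model
resp = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=json.dumps({"inputText": "hotel cancellation policy"}),
)
embedding = json.loads(resp["body"].read())["embedding"]

conn = psycopg2.connect(host="aurora-endpoint", dbname="vectors",
                        user="app_user", password="...")
with conn, conn.cursor() as cur:
    # Filtering on tenant_id keeps each tenant's documents isolated,
    # even though all embeddings live in one table
    cur.execute(
        """SELECT chunk_text
             FROM document_embeddings
            WHERE tenant_id = %s
            ORDER BY embedding <=> %s::vector
            LIMIT 5""",
        ("tenant-123", "[" + ",".join(map(str, embedding)) + "]"),
    )
    for (chunk,) in cur.fetchall():
        print(chunk)
```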
How Iterate.ai uses HAQM MemoryDB to accelerate and cost-optimize their workforce management conversational AI agent
Iterate.ai is an enterprise AI platform company delivering innovative AI solutions to industries such as retail, finance, healthcare, and quick-service restaurants. Among its standout offerings is Frontline, an AI-powered workforce management platform designed to support and empower frontline workers. Available on both the Apple App Store and Google Play, Frontline uses advanced AI tools to streamline operational efficiency and enhance communication among dispersed workforces. In this post, we give an overview of durable semantic caching in HAQM MemoryDB, and share how Iterate.ai used this functionality to accelerate and cost-optimize Frontline.
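The semantic caching pattern works like this: embed the incoming question, look for a previously answered question whose embedding is close enough, and only call the LLM on a cache miss. A rough sketch against MemoryDB's vector search (the index name, field names, model IDs, and distance threshold are illustrative assumptions):

```python
import json
import boto3
import numpy as np
import redis
from redis.commands.search.query import Query

r = redis.Redis(host="memorydb-endpoint", port=6379, ssl=True, decode_responses=True)
bedrock = boto3.client("bedrock-runtime")

def embed(text):
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"], dtype=np.float32)

def answer(question, threshold=0.15):
    vec = embed(question).tobytes()
    # Look for the nearest previously answered question in the cache index
    query = (Query("*=>[KNN 1 @embedding $v AS score]")
             .return_fields("response", "score").dialect(2))
    hits = r.ft("cache_idx").search(query, query_params={"v": vec})
    if hits.docs and float(hits.docs[0].score) < threshold:
        return hits.docs[0].response  # cache hit: skip the LLM call entirely
    out = bedrock.converse(           # cache miss: invoke the model
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": question}]}],
    )
    response = out["output"]["message"]["content"][0]["text"]
    r.hset(f"cache:{hash(question)}", mapping={"embedding": vec, "response": response})
    return response
```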
Accelerate your generative AI application development with HAQM Bedrock Knowledge Bases Quick Create and HAQM Aurora Serverless
In this post, we look at two capabilities in HAQM Bedrock Knowledge Bases that make it easier to build RAG workflows with HAQM Aurora Serverless v2 as the vector store. The first capability helps you easily create an Aurora Serverless v2 knowledge base to use with HAQM Bedrock and the second capability enables you to automate deploying your RAG workflow across environments.
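The same knowledge base configuration can also be expressed in code for repeatable deployments across environments. A hypothetical sketch with boto3 that points HAQM Bedrock at an existing Aurora vector store (every ARN, name, and field mapping below is a placeholder):

```python
import boto3

agent = boto3.client("bedrock-agent")

# Create a knowledge base backed by an existing Aurora PostgreSQL vector table
kb = agent.create_knowledge_base(
    name="docs-kb",
    roleArn="arn:aws:iam::111122223333:role/BedrockKBRole",
    knowledgeBaseConfiguration={
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v2:0",
        },
    },
    storageConfiguration={
        "type": "RDS",
        "rdsConfiguration": {
            "resourceArn": "arn:aws:rds:us-east-1:111122223333:cluster:my-aurora-cluster",
            "credentialsSecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:kb-creds",
            "databaseName": "postgres",
            "tableName": "bedrock_integration.bedrock_kb",
            "fieldMapping": {
                "primaryKeyField": "id",
                "vectorField": "embedding",
                "textField": "chunks",
                "metadataField": "metadata",
            },
        },
    },
)
print(kb["knowledgeBase"]["knowledgeBaseId"])
```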
Build a scalable, context-aware chatbot with HAQM DynamoDB, HAQM Bedrock, and LangChain
HAQM DynamoDB, HAQM Bedrock, and LangChain can provide a powerful combination for building robust, context-aware chatbots. In this post, we explore how to use LangChain with DynamoDB to manage conversation history and integrate it with HAQM Bedrock to deliver intelligent, contextually aware responses. We break down the concepts behind the DynamoDB chat connector in LangChain, discuss the advantages of this approach, and guide you through the essential steps to implement it in your own chatbot.
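A condensed sketch of what that wiring looks like (the table name, model ID, and session ID are assumptions): LangChain's DynamoDB connector persists each session's messages, and RunnableWithMessageHistory injects them into every prompt.

```python
from langchain_aws import ChatBedrock
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])
chain = prompt | ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

# DynamoDB stores one item per session, so context survives restarts
chatbot = RunnableWithMessageHistory(
    chain,
    lambda session_id: DynamoDBChatMessageHistory(
        table_name="SessionTable", session_id=session_id
    ),
    input_messages_key="question",
    history_messages_key="history",
)

reply = chatbot.invoke(
    {"question": "What did I ask you earlier?"},
    config={"configurable": {"session_id": "user-42"}},
)
```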
Use a DAO to govern LLM training data, Part 4: MetaMask authentication
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, focusing on the ingestion of training data. In Part 2, we created and deployed a minimalistic smart contract on the Ethereum Sepolia testnet using Remix and MetaMask, establishing a mechanism to govern which training data can be uploaded to the knowledge base and by whom. In Part 3, we set up HAQM API Gateway and deployed AWS Lambda functions to copy data from InterPlanetary File System (IPFS) to HAQM Simple Storage Service (HAQM S3) and start a knowledge base ingestion job, creating a seamless data flow from IPFS to the knowledge base. In this post, we demonstrate how to configure MetaMask authentication, create a frontend interface, and test the solution.
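The MetaMask flow typically has the wallet sign a server-issued nonce, which the backend verifies before granting access. A hedged Python sketch of that server-side check using the eth_account library (the function and parameter names are illustrative):

```python
from eth_account import Account
from eth_account.messages import encode_defunct

def verify_wallet(address: str, nonce: str, signature: str) -> bool:
    # Recover the address that signed the nonce (MetaMask's personal_sign format)
    message = encode_defunct(text=nonce)
    recovered = Account.recover_message(message, signature=signature)
    # The caller is authenticated only if the recovered address matches the claimed one
    return recovered.lower() == address.lower()
```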
Use a DAO to govern LLM training data, Part 3: From IPFS to the knowledge base
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, focusing on the ingestion of training data. In Part 2, we created and deployed a minimalistic smart contract on the Ethereum Sepolia testnet using Remix and MetaMask, establishing a mechanism to govern which training data can be uploaded to the knowledge base and by whom. In this post, we set up HAQM API Gateway and deploy AWS Lambda functions to copy data from InterPlanetary File System (IPFS) to HAQM Simple Storage Service (HAQM S3) and start a knowledge base ingestion job.
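Sketched as a single Lambda handler, the flow fetches the approved content from an IPFS gateway, lands it in HAQM S3, and starts the ingestion job (the gateway URL and environment variable names are simplified assumptions, and error handling is omitted):

```python
import os
import urllib.request
import boto3

s3 = boto3.client("s3")
agent = boto3.client("bedrock-agent")

def handler(event, context):
    cid = event["cid"]  # IPFS content identifier approved by the DAO
    data = urllib.request.urlopen(f"https://ipfs.io/ipfs/{cid}").read()
    s3.put_object(Bucket=os.environ["KB_BUCKET"], Key=f"training/{cid}", Body=data)
    # Trigger a sync so the knowledge base ingests the new S3 object
    job = agent.start_ingestion_job(
        knowledgeBaseId=os.environ["KB_ID"],
        dataSourceId=os.environ["DATA_SOURCE_ID"],
    )
    return {"ingestionJobId": job["ingestionJob"]["ingestionJobId"]}
```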
Use a DAO to govern LLM training data, Part 2: The smart contract
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, specifically focusing on the ingestion of training data. In this post, we focus on the writing and deployment of the Ethereum smart contract that contains the outcome of the DAO decisions.
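Once the contract is deployed, off-chain components (such as the Lambda functions in Part 3) can read the DAO's decisions before acting on them. A hypothetical sketch with web3.py, where the contract address, ABI fragment, and isApproved function are stand-ins for whatever the deployed contract actually exposes:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://sepolia.infura.io/v3/YOUR_PROJECT_ID"))

# Minimal ABI fragment for a view function recording the DAO's decision
abi = [{
    "name": "isApproved",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "cid", "type": "string"},
               {"name": "uploader", "type": "address"}],
    "outputs": [{"name": "", "type": "bool"}],
}]
contract = w3.eth.contract(address="0xYourContractAddress", abi=abi)

# Check whether the DAO approved this CID/uploader pair before ingesting it
approved = contract.functions.isApproved("QmExampleCid", "0xUploaderAddress").call()
print(approved)
```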