AWS Database Blog
Explore the new openCypher custom functions and subquery support in HAQM Neptune
In this post, we describe some of the openCypher features that have been released as part of the 1.4.2.0 engine update to HAQM Neptune. Neptune provides developers with the choice of building their graph applications using three open graph query languages: openCypher, Apache TinkerPop Gremlin, and the World Wide Web Consortium’s (W3C) SPARQL 1.1. You can use the guide at the end of this post to try out the new features that are described.
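As a quick illustration of the subquery support, here is a minimal sketch (the cluster endpoint, labels, and property names are hypothetical) that runs an openCypher CALL subquery against Neptune through the boto3 neptunedata client:

```python
import boto3

# Hypothetical Neptune cluster endpoint; replace with your own.
client = boto3.client(
    "neptunedata",
    endpoint_url="https://my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182",
)

# A CALL subquery computes a per-node aggregate inside the outer MATCH.
query = """
MATCH (p:person)
CALL {
  WITH p
  MATCH (p)-[:knows]->(f:person)
  RETURN count(f) AS friendCount
}
RETURN p.name AS name, friendCount
ORDER BY friendCount DESC
LIMIT 10
"""

response = client.execute_open_cypher_query(openCypherQuery=query)
print(response["results"])
```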
Connect HAQM Bedrock Agents with HAQM Aurora PostgreSQL using HAQM RDS Data API
In this post, we describe a solution to integrate generative AI applications with relational databases like HAQM Aurora PostgreSQL-Compatible Edition using RDS Data API (Data API) for simplified database interactions, HAQM Bedrock for AI model access, HAQM Bedrock Agents for task automation, and HAQM Bedrock Knowledge Bases for contextual information retrieval.
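To make the interaction concrete, the following minimal sketch (the ARNs, database, table, and column names are hypothetical) shows the kind of Data API call an agent action group could make to query Aurora over HTTPS without managing database connections or drivers:

```python
import boto3

rds_data = boto3.client("rds-data")

# Hypothetical cluster and secret ARNs; replace with your own resources.
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster"
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret"

# Data API sends SQL over HTTPS, so there is no connection pool to manage.
response = rds_data.execute_statement(
    resourceArn=CLUSTER_ARN,
    secretArn=SECRET_ARN,
    database="sales",
    sql="SELECT id, status FROM orders WHERE customer_id = :cid",
    parameters=[{"name": "cid", "value": {"longValue": 42}}],
)
print(response["records"])
```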
Run SQL Server post-migration activities using Cloud Migration Factory on AWS
In this post, we show you essential post-migration tasks to perform after migrating your SQL Server database to HAQM EC2, such as validating database status, configuring performance settings, and running consistency checks. We also explore how Cloud Migration Factory on AWS (CMF) can automate these tasks, providing efficiency, scalability, and heightened visibility to simplify and expedite your migration process.
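As a rough sketch of what such checks look like when scripted (the endpoint, credentials, and database name are hypothetical, and pyodbc is just one possible driver), the following runs a status query and a consistency check:

```python
import pyodbc  # third-party driver: pip install pyodbc

# Hypothetical RDS/EC2 endpoint and credentials; replace with your own.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myinstance.example.us-east-1.amazonaws.com,1433;"
    "DATABASE=master;UID=admin;PWD=<password>"
)
conn.autocommit = True  # run DBCC outside an explicit transaction
cursor = conn.cursor()

# Validate database status after the migration.
cursor.execute("SELECT name, state_desc FROM sys.databases")
for name, state in cursor.fetchall():
    print(f"{name}: {state}")

# Run a consistency check on one database.
cursor.execute("DBCC CHECKDB (MyAppDb) WITH NO_INFOMSGS")
```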
HAQM Aurora Global Database introduces support for up to 10 secondary Regions
In this post, we dive deep into HAQM Aurora Global Database’s new support for up to 10 secondary Regions and explore the use cases it unlocks. An Aurora Global Database consists of one primary Region and up to 10 read-only secondary Regions for low-latency local reads.
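For illustration, a secondary Region can be attached to an existing global database with a single API call; this boto3 sketch uses hypothetical identifiers:

```python
import boto3

# Create the RDS client in the Region that will host the new secondary.
rds = boto3.client("rds", region_name="eu-west-1")

# Attach a read-only secondary cluster to an existing global database.
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-secondary-eu",
    Engine="aurora-postgresql",
    EngineVersion="15.4",
    GlobalClusterIdentifier="my-global-database",
)
```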
How to configure a Linked Server between HAQM RDS for SQL Server and Teradata database
In this post, we demonstrate how to configure a linked server between HAQM RDS for SQL Server and a Teradata database instance. We guide you through the step-by-step process to establish this connection and show you how to verify its functionality.
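The exact provider configuration is covered in the post; as a loose illustration only (the server name, DSN, and credentials are all hypothetical), creating and testing a linked server typically looks like this when scripted with pyodbc:

```python
import pyodbc  # pip install pyodbc

# Hypothetical RDS for SQL Server endpoint and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myinstance.xxxx.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=master;UID=admin;PWD=<password>"
)
conn.autocommit = True
cursor = conn.cursor()

# Register the Teradata system as a linked server (provider and DSN
# values are illustrative assumptions).
cursor.execute("""
EXEC master.dbo.sp_addlinkedserver
    @server = N'TERADATA_LINK',
    @srvproduct = N'Teradata',
    @provider = N'MSDASQL',
    @datasrc = N'TeradataDSN'
""")

# Verify the link with a pass-through query.
cursor.execute("SELECT * FROM OPENQUERY(TERADATA_LINK, 'SELECT 1 AS ok')")
print(cursor.fetchone())
```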
Achieve up to 1.7 times higher write throughput and 1.38 times better price performance with HAQM Aurora PostgreSQL on AWS Graviton4-based R8g instances
In this post, we demonstrate how upgrading to Graviton4-based R8g instances with Aurora PostgreSQL-Compatible 17.4 on the Aurora I/O-Optimized cluster configuration can deliver significant price-performance gains: up to 1.7 times higher write throughput, 1.38 times better price-performance, and commit latency reduced by up to 46% on r8g.16xlarge instances and 38% on r8g.2xlarge instances, compared to Graviton2-based R6g instances.
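The upgrade itself is a single instance-class change; this boto3 sketch (the instance identifier is hypothetical) shows the API call:

```python
import boto3

rds = boto3.client("rds")

# Move an Aurora PostgreSQL instance to a Graviton4-based class.
# In production, consider scheduling this in a maintenance window
# instead of applying immediately.
rds.modify_db_instance(
    DBInstanceIdentifier="my-aurora-writer",
    DBInstanceClass="db.r8g.2xlarge",
    ApplyImmediately=True,
)
```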
How HAQM maintains accurate totals at scale with HAQM DynamoDB
HAQM’s Finance Technologies Tax team (FinTech Tax) manages mission-critical services for tax computation, deduction, remittance, and reporting across global jurisdictions. These services process billions of transactions annually across multiple international marketplaces. In this post, we show how the team implemented tiered tax withholding using HAQM DynamoDB transactions and conditional writes.
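As a simplified sketch of the general pattern (the table, key schema, and amounts are hypothetical, not HAQM's actual design), a DynamoDB transaction can record a withholding event and update a running total atomically, guarded by a condition:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Record a withholding event and bump the running total in one
# all-or-nothing transaction, but only while the total stays under a cap.
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "TaxWithholding",
                "Key": {"pk": {"S": "SELLER#123"}, "sk": {"S": "TOTALS#2025"}},
                "UpdateExpression": "ADD withheld :amt",
                # :limit is the cap (10000) minus this deduction (250).
                "ConditionExpression": "attribute_not_exists(withheld) OR withheld <= :limit",
                "ExpressionAttributeValues": {
                    ":amt": {"N": "250"},
                    ":limit": {"N": "9750"},
                },
            }
        },
        {
            "Put": {
                "TableName": "TaxWithholding",
                "Item": {
                    "pk": {"S": "SELLER#123"},
                    "sk": {"S": "TXN#2025-06-01#abc"},
                    "amount": {"N": "250"},
                },
                # Idempotency guard: reject a duplicate transaction record.
                "ConditionExpression": "attribute_not_exists(sk)",
            }
        },
    ]
)
```

Both operations succeed or fail together, so the per-transaction record and the running total can never drift apart.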
Build an AI-powered text-to-SQL chatbot using HAQM Bedrock, HAQM MemoryDB, and HAQM RDS
Text-to-SQL can automatically transform analytical questions into executable SQL code for enhanced data accessibility and streamlined data exploration, from analyzing sales data and monitoring performance metrics to assessing customer feedback. In this post, we explore how to use HAQM Relational Database Service (HAQM RDS) for PostgreSQL and HAQM Bedrock to build a generative AI text-to-SQL chatbot application using Retrieval Augmented Generation (RAG). We also show how to use HAQM MemoryDB with vector search to provide semantic caching that further accelerates this solution.
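As a rough sketch of the semantic-caching idea (the endpoint, index name, schema, and distance threshold are assumptions, and embedding generation is elided), a cache lookup against a MemoryDB vector index might look like this with redis-py:

```python
import numpy as np
import redis
from redis.commands.search.query import Query

# Hypothetical MemoryDB endpoint with a pre-created vector index
# ("cache_idx") over an "embedding" field on cached question entries.
r = redis.Redis(
    host="my-memorydb.xxxx.memorydb.us-east-1.amazonaws.com",
    port=6379,
    ssl=True,
)

def lookup_cached_sql(question_embedding: np.ndarray, max_distance: float = 0.2):
    """Return previously generated SQL if a semantically similar
    question is already cached, else None."""
    q = (
        Query("*=>[KNN 1 @embedding $vec AS score]")
        .return_fields("sql", "score")
        .dialect(2)
    )
    res = r.ft("cache_idx").search(
        q, query_params={"vec": question_embedding.astype(np.float32).tobytes()}
    )
    if res.docs and float(res.docs[0].score) <= max_distance:
        return res.docs[0].sql  # cache hit: skip the LLM round trip
    return None
```

On a miss, the application would run the text-to-SQL chain and write the new question embedding and generated SQL back to the cache.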
HAQM DynamoDB data modeling for Multi-Tenancy – Part 3
In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore areas that are important for decision-making and provide insights into the influences to help you plan your data model for a multi-tenant application. In this last part of the series, we explore how to validate the chosen data model from both a performance and a security perspective. Additionally, we cover how to extend the data model as new access patterns and requirements arise.
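One common way to validate tenant isolation (a general pattern, assumed here rather than quoted from the post) is an IAM policy that restricts access to items whose partition key begins with the caller's tenant ID:

```python
import json

# Tenant-isolation policy using the dynamodb:LeadingKeys condition key.
# The table ARN and tenant prefix are hypothetical.
tenant_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/IssueTracker",
            "Condition": {
                "ForAllValues:StringLike": {
                    "dynamodb:LeadingKeys": ["TENANT#acme*"]
                }
            },
        }
    ],
}
print(json.dumps(tenant_policy, indent=2))
```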
HAQM DynamoDB data modeling for Multi-Tenancy – Part 2
In this series of posts, we walk through the process of creating a DynamoDB data model using an example multi-tenant application, a customer issue tracking service. The goal of this series is to explore areas that are important for decision-making and provide insights into the influences to help you plan your data model for a multi-tenant application. In this post, we continue the design process, selecting a partition key design and creating our data schema. We also show how to implement the access patterns using the AWS Command Line Interface (AWS CLI).
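The post implements the access patterns with the AWS CLI; as a boto3 equivalent (the table name and key values are hypothetical), a typical query retrieves one tenant's items through a partition key plus a sort key prefix:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Fetch all open issues for one tenant's project using a composite
# sort key prefix.
response = dynamodb.query(
    TableName="IssueTracker",
    KeyConditionExpression="pk = :pk AND begins_with(sk, :sk)",
    ExpressionAttributeValues={
        ":pk": {"S": "TENANT#acme#PROJECT#web"},
        ":sk": {"S": "ISSUE#OPEN#"},
    },
)
for item in response["Items"]:
    print(item["sk"]["S"])
```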