AWS Database Blog
Best practices for creating a VPC for HAQM RDS for Db2
You can create an HAQM RDS for Db2 instance by using the AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, Terraform by HashiCorp, AWS Lambda functions, or other methods. One of the prerequisites for creating an RDS for Db2 instance is an appropriately configured virtual private cloud (VPC). This post shows how to create a VPC that follows best practices for HAQM RDS databases in general, and HAQM RDS for Db2 in particular, through a one-click automated deployment.
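As a minimal illustration of the underlying setup (not the one-click deployment itself), the following boto3 sketch creates a VPC with two private subnets in separate Availability Zones and groups them into a DB subnet group, which HAQM RDS requires. All names, Regions, and CIDR ranges are hypothetical placeholders.

```python
import boto3

# Hypothetical Region, names, and CIDR ranges; RDS requires subnets in at least two AZs.
ec2 = boto3.client("ec2", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
ec2.create_tags(Resources=[vpc["VpcId"]], Tags=[{"Key": "Name", "Value": "rds-db2-vpc"}])

subnet_ids = []
for cidr, az in [("10.0.1.0/24", "us-east-1a"), ("10.0.2.0/24", "us-east-1b")]:
    subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock=cidr, AvailabilityZone=az)
    subnet_ids.append(subnet["Subnet"]["SubnetId"])

# A DB subnet group spanning both AZs is a prerequisite for the RDS for Db2 instance.
rds.create_db_subnet_group(
    DBSubnetGroupName="rds-db2-subnet-group",
    DBSubnetGroupDescription="Private subnets for HAQM RDS for Db2",
    SubnetIds=subnet_ids,
)
```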
How the HAQM TimeHub team designed a recovery and validation framework for their data replication framework: Part 4
With AWS DMS, you can use data validation to make sure your data was migrated accurately from the source to the target. If you enable validation for a task, AWS DMS begins comparing the source and target data immediately after a full load is performed for a table. In this post, we describe the custom framework we built on top of AWS DMS validation tasks to maintain data integrity as part of the ongoing replication between source and target databases.
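For reference, validation on an AWS DMS task is controlled through the task settings JSON. The following boto3 sketch shows the relevant ValidationSettings block when creating a task; the endpoint and replication instance ARNs and the table mapping are placeholders, not the TimeHub team's actual configuration.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Validation is enabled through the ValidationSettings section of the task settings.
task_settings = {"ValidationSettings": {"EnableValidation": True, "ThreadCount": 5}}
table_mappings = {
    "rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Placeholder ARNs for the source, target, and replication instance.
dms.create_replication_task(
    ReplicationTaskIdentifier="timehub-replication-task",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(task_settings),
)
```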
How the HAQM TimeHub team handled disruption in AWS DMS CDC task caused by Oracle RESETLOGS: Part 3
In How the HAQM TimeHub team designed resiliency and high availability for their data replication framework: Part 2, we covered different scenarios for handling replication failures at the source database (Oracle), AWS DMS, and the target database (HAQM Aurora PostgreSQL-Compatible Edition). During our resilience scenario testing, when a failover occurred between the Oracle primary database instance and the primary standby instance and the database was opened with RESETLOGS, AWS DMS couldn’t automatically read the new set of logs from the new database incarnation. In this post, we dive deep into the solution the HAQM TimeHub team used to detect such a scenario and recover from it. We then describe the post-recovery steps to validate and correct data discrepancies caused by the failover.
How the HAQM TimeHub team designed resiliency and high availability for their data replication framework: Part 2
In How the HAQM TimeHub team built a data replication framework using AWS DMS: Part 1, we covered how we built a low-latency replication solution to replicate data from an Oracle database using AWS DMS to HAQM Aurora PostgreSQL-Compatible Edition. In this post, we elaborate on our approach to address the resilience of the ongoing replication between the source and target databases.
Understand the benefits of physical replication in HAQM RDS for PostgreSQL Blue/Green Deployments
With the recent addition of physical replication as an option for RDS Blue/Green Deployments, you can overcome most of the limitations of logical replication. This makes physical replication particularly well-suited for use cases like minor version upgrades, schema changes (DDL operations) in the blue environment, and storage adjustments. In this post, we delve into the advantages of using physical replication in RDS for PostgreSQL blue/green deployments to simplify database operations and scale with application demands. We explore the key benefits of physical replication and provide a step-by-step guide to help you get started with this new capability.
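As a quick illustration, a blue/green deployment can be created for an existing RDS for PostgreSQL instance with a single API call. The following boto3 sketch uses a placeholder source ARN and target engine version; whether the green environment is kept in sync with physical or logical replication depends on the conditions described in the post, not on a parameter in this call.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder source ARN and target version; the green environment is created as a
# copy of the blue environment and kept in sync until you switch over.
response = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="rds-pg-minor-upgrade",
    Source="arn:aws:rds:us-east-1:111122223333:db:my-postgres-instance",
    TargetEngineVersion="16.4",
)
print(response["BlueGreenDeployment"]["Status"])
```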
Join your HAQM RDS for Db2 instances across accounts to a single shared domain
With HAQM RDS for Db2, you can seamlessly authenticate your users and groups, with or without Kerberos authentication, using a single AWS Managed Microsoft AD directory that can serve multiple accounts. In this post, we use AWS Managed Microsoft AD from one AWS account to provide Microsoft AD authentication to HAQM RDS for Db2 in a different account.
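For orientation, joining an RDS for Db2 instance to a directory at creation time is done with the Domain and DomainIAMRoleName parameters. In the following boto3 sketch, the directory ID, IAM role name, engine edition, and other values are placeholders, and additional Db2-specific settings (such as the license model) may be required in practice.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder identifiers; the directory can live in another account and be
# made available through AWS Managed Microsoft AD directory sharing.
rds.create_db_instance(
    DBInstanceIdentifier="rds-db2-instance",
    Engine="db2-se",                      # or db2-ae, depending on the edition
    DBInstanceClass="db.m6i.large",
    AllocatedStorage=100,
    MasterUsername="db2admin",
    MasterUserPassword="REPLACE_ME",
    Domain="d-1234567890",                # shared AWS Managed Microsoft AD directory ID
    DomainIAMRoleName="rds-directoryservice-access-role",
)
```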
Scaling to 70M users: How Flo Health optimized HAQM DynamoDB for cost and performance
Flo is the largest app in the Health and Fitness category worldwide, with 70 million monthly active users. In this post, we explain best practices Flo implemented to scale to more than 70 million monthly active users while achieving 60% cost efficiency with HAQM DynamoDB.
Capture data changes while restoring an HAQM DynamoDB table
This is the first post of a series dedicated to table restores and data integrity. In this post, we present a solution that automates the PITR restoration process and handles data changes that occur during the restoration, providing a smooth transition back to the restored DynamoDB table with near-zero downtime. This solution enables you to restore a DynamoDB table efficiently with minimal impact to your application.
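To ground the discussion, the restore step itself is a single API call. The following boto3 sketch restores a table to the latest restorable time as a new table; the table names are placeholders, and the change-capture and cutover logic described in the post is not shown.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Placeholder table names; PITR must already be enabled on the source table.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="orders",
    TargetTableName="orders-restored",
    UseLatestRestorableTime=True,
)

# Wait until the restored table exists and is ready before redirecting traffic to it.
waiter = dynamodb.get_waiter("table_exists")
waiter.wait(TableName="orders-restored")
```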
Best practices for maintenance activities in HAQM RDS for Oracle
The HAQM RDS for Oracle User Guide provides comprehensive coverage of the maintenance activities in HAQM RDS for Oracle. However, it can be cumbersome to quickly extract the best practices for the various maintenance activities from the user guide. In this post, we describe the key maintenance activities and the best practices to follow for each of them.
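As one small example of routine maintenance hygiene, pending maintenance actions can be reviewed and scheduled programmatically. The following boto3 sketch lists them and opts a hypothetical instance into applying an action at its next maintenance window; the instance ARN and action name are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# List maintenance actions (for example, OS or engine patches) pending on each resource.
for resource in rds.describe_pending_maintenance_actions()["PendingMaintenanceActions"]:
    print(resource["ResourceIdentifier"])
    for action in resource["PendingMaintenanceActionDetails"]:
        print("  ", action["Action"], action.get("Description", ""))

# Opt a hypothetical instance into applying an action during its next maintenance window.
rds.apply_pending_maintenance_action(
    ResourceIdentifier="arn:aws:rds:us-east-1:111122223333:db:my-oracle-instance",
    ApplyAction="system-update",
    OptInType="next-maintenance",
)
```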
Using RDS Proxy with HAQM RDS Multi-AZ DB instance deployment to improve planned failover time
In this post, we demonstrate improvements in planned failover downtime for a Multi-AZ DB instance deployment with HAQM RDS Proxy, the result of several optimizations made by HAQM RDS. In the event of a failover, HAQM RDS automatically switches the roles of the primary and standby instances and updates the IP address associated with the database’s DNS hostname. This allows client applications to keep their connection settings during failover. This process, known as DNS propagation, can take up to 35 seconds to complete. RDS Proxy eliminates this DNS propagation delay by continuously monitoring both instances, allowing it to route connections without waiting for DNS updates. As a result, RDS Proxy delivers a faster failover response for client applications, maximizing availability during failovers.
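For context, an RDS Proxy in front of a Multi-AZ DB instance can be set up with a few API calls, after which the application connects to the proxy endpoint instead of the instance endpoint. The following boto3 sketch uses placeholder names, a hypothetical Secrets Manager secret for database credentials, and a hypothetical IAM role.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder names, ARNs, and subnet IDs; the proxy authenticates to the database
# with credentials stored in AWS Secrets Manager.
rds.create_db_proxy(
    DBProxyName="my-db-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# In practice, wait for the proxy to become available before registering targets.
# Applications then connect to the proxy endpoint, which tracks the current primary.
rds.register_db_proxy_targets(
    DBProxyName="my-db-proxy",
    DBInstanceIdentifiers=["my-multi-az-instance"],
)
```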