AWS Database Blog

Category: Advanced (300)

Monitoring your HAQM Aurora PostgreSQL-Compatible and HAQM RDS for PostgreSQL databases for integer sequence overflow

In this post, we discuss integer sequence overflow, its causes, and—most importantly—how to efficiently set up alerts using HAQM SNS and use AWS Lambda to resolve such issues in HAQM Aurora PostgreSQL-Compatible Edition and HAQM RDS for PostgreSQL.
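The core of this kind of monitoring is checking how much of a sequence's integer range has been consumed and raising an alert (for example, via HAQM SNS) once a threshold is crossed. As a minimal sketch, the hypothetical helpers below assume you have already fetched a sequence's `last_value` and backing data type from the database; the function names and the 80% threshold are illustrative, not part of the post:

```python
# Upper bounds for PostgreSQL integer types that back sequences.
INT_MAX = {"smallint": 2**15 - 1, "integer": 2**31 - 1, "bigint": 2**63 - 1}

def sequence_usage_pct(last_value: int, data_type: str = "integer") -> float:
    """Return the percentage of the sequence's range already consumed."""
    return round(100.0 * last_value / INT_MAX[data_type], 2)

def needs_alert(last_value: int, data_type: str = "integer",
                threshold_pct: float = 80.0) -> bool:
    """True when usage crosses the alerting threshold.

    In a real setup, a Lambda function might run this check on a schedule
    and publish to an SNS topic when it returns True.
    """
    return sequence_usage_pct(last_value, data_type) >= threshold_pct
```

A sequence at 2,000,000,000 on a plain `integer` column has used roughly 93% of its range, so it would trip an 80% threshold, while the same value on a `bigint` sequence would not.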

Querying and writing to MySQL and MariaDB from HAQM Aurora and HAQM RDS for PostgreSQL using the mysql_fdw extension, Part 2: Handling foreign objects

In this post, we focus on the features of the mysql_fdw PostgreSQL extension on HAQM RDS for PostgreSQL that help you manage large sets of data residing on an external database. The extension enables you to import individual objects, a large number of objects, or a selective set of objects from your MySQL database at the schema level, and it simplifies how you retrieve information about the MySQL/MariaDB schema, ultimately making it easier to read and write data. We also provide an introduction to understanding query performance on foreign tables.

Dynamic data masking in HAQM RDS for PostgreSQL, HAQM Aurora PostgreSQL, and Babelfish for Aurora PostgreSQL

There are a variety of techniques available to support data masking in databases, each with its own trade-offs. In this post, we explore dynamic data masking, a technique that returns anonymized data from a query without modifying the underlying data. The approach is based on dynamic masking views, which mask personally identifiable information (PII) columns for unauthorized users. We discuss how to implement this technique in HAQM RDS for PostgreSQL and HAQM Aurora PostgreSQL, including Babelfish for Aurora PostgreSQL.
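The idea behind a masking view is simple: authorized roles see the stored value, everyone else sees an anonymized form, and the base table is never changed. As a minimal sketch of that logic (the function, role names, and last-4-characters policy are illustrative assumptions, not the post's actual view definitions):

```python
def mask_value(value: str, role: str,
               authorized_roles: tuple = ("pii_admin",)) -> str:
    """Return the real value for authorized roles, a masked form otherwise.

    Mirrors what a dynamic masking view does per column: the underlying
    data is untouched; only the query result is anonymized.
    """
    if role in authorized_roles:
        return value
    if len(value) <= 4:
        return "*" * len(value)
    # Keep only the last 4 characters visible, a common masking policy.
    return "*" * (len(value) - 4) + value[-4:]
```

In PostgreSQL, the equivalent branching would typically live in a view's `CASE` expression keyed on the session user, with grants arranged so unauthorized users can query only the view, not the base table.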

Improve HAQM Timestream for InfluxDB security posture by automating rotation for long-lived credentials

In this post, we walk you through how to make your HAQM Timestream for InfluxDB deployments more secure by offering a mechanism to automatically rotate long-lived credentials. We use AWS Secrets Manager to store your tokens and user credentials as secrets and rotate the secrets using the included AWS Lambda functions.
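Rotation schemes like this generally hinge on one check: is the credential older than the allowed window? As a minimal sketch of that decision (the function name and 30-day default are assumptions for illustration; the post's actual rotation is driven by Secrets Manager and its bundled Lambda functions):

```python
from datetime import datetime, timedelta, timezone

def rotation_due(last_rotated: datetime, max_age_days: int = 30) -> bool:
    """True when a credential has aged past the rotation window.

    A rotation Lambda would then generate a new token or password,
    update the secret in Secrets Manager, and apply it to InfluxDB.
    """
    age = datetime.now(timezone.utc) - last_rotated
    return age >= timedelta(days=max_age_days)
```

In practice, Secrets Manager schedules the rotation for you, so a check like this only matters if you are auditing secrets that were created outside the managed rotation flow.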

Comparison of test_decoding and pglogical plugins in HAQM Aurora PostgreSQL for data migration using AWS DMS

In this post, we provide details on two PostgreSQL plugins available for use by AWS DMS. We compare these plugin options and share test results to help database administrators understand the best practices and benefits of each plugin when working on migrations.

Optimize HAQM RDS performance with io2 Block Express storage for production workloads

Choosing the right storage configuration that meets performance requirements is a common challenge when creating and managing database instances. In this post, we provide an end-to-end guide for what storage class to choose depending on your use case. In addition, we compare the performance of different storage volumes on open source engines supported by HAQM RDS, to validate them from a database-centric perspective.

Best practices for creating a VPC for HAQM RDS for Db2

You can create an HAQM RDS for Db2 instance by using the AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, Terraform by HashiCorp, AWS Lambda functions, or other methods. One of the prerequisites for creating an RDS for Db2 instance is to configure the virtual private cloud (VPC) appropriately. This post shows how to create a VPC that follows best practices for any HAQM RDS database in general, and HAQM RDS for Db2 in particular, through a one-click automated deployment.

How the HAQM TimeHub team designed a recovery and validation framework for their data replication framework: Part 4

With AWS DMS, you can use data validation to make sure your data was migrated accurately from the source to the target. If you enable validation for a task, AWS DMS begins comparing the source and target data immediately after a full load is performed for a table. In this post, we describe the custom framework we built on top of AWS DMS validation tasks to maintain data integrity as part of the ongoing replication between source and target databases.
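At its core, this kind of validation compares source and target row sets keyed by primary key and classifies the differences. The sketch below illustrates that comparison on in-memory snapshots; the function name and return shape are assumptions for illustration and not the TimeHub framework's actual implementation:

```python
def diff_keys(source_rows: dict, target_rows: dict):
    """Classify differences between source and target snapshots.

    Both arguments map primary keys to row values. Returns three sets:
    keys missing on the target, keys extra on the target, and keys
    present on both sides whose values differ.
    """
    src_keys, tgt_keys = set(source_rows), set(target_rows)
    missing_on_target = src_keys - tgt_keys
    extra_on_target = tgt_keys - src_keys
    mismatched = {k for k in src_keys & tgt_keys
                  if source_rows[k] != target_rows[k]}
    return missing_on_target, extra_on_target, mismatched
```

A custom framework layered on AWS DMS validation would typically run checks like this in batches and feed the three buckets into re-sync or alerting logic rather than comparing whole tables in memory.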

How the HAQM TimeHub team handled disruption in AWS DMS CDC task caused by Oracle RESETLOGS: Part 3

In How the HAQM TimeHub team designed resiliency and high availability for their data replication framework: Part 2, we covered different scenarios for handling replication failures at the source database (Oracle), AWS DMS, and the target database (HAQM Aurora PostgreSQL-Compatible Edition). During our resilience scenario testing, when a failover occurred between the Oracle primary database instance and its standby instance and the database opened with RESETLOGS, AWS DMS couldn't automatically read the new set of logs under the new database incarnation. In this post, we dive deep into the solution the HAQM TimeHub team used to detect such a scenario and recover from it. We then describe the post-recovery steps to validate and correct data discrepancies caused by the failover.