AWS Database Blog
Category: Intermediate (200)
Automate cross-account backup of HAQM RDS for Oracle including database parameter groups, option groups and security groups
In this post, we showcase the AWS CloudFormation support feature of AWS Backup to automate the backup of HAQM RDS for Oracle across AWS accounts, including customized database resources such as the database parameter group, option group, and security group.
How PayU uses HAQM Keyspaces (for Apache Cassandra) as a feature store
PayU provides payment gateway solutions to online businesses through its award-winning technology and has empowered over 500,000 businesses, including the country's leading enterprises, e-commerce giants, and SMBs, to process millions of transactions daily. In this post, we outline how we at PayU use HAQM Keyspaces (for Apache Cassandra) as the feature store for real-time, low-latency inference in the payment flow.
How Scopely scaled “MONOPOLY GO!” for millions of players around the globe with HAQM DynamoDB
In this post, we show you how HAQM DynamoDB enabled Scopely to quickly respond to their rapid growth with consistent game performance and availability. We also describe how Scopely improved the availability and performance of their matchmaking service with DynamoDB after facing challenges at scale with other solutions.
Unlock the power of parallel indexing in HAQM DocumentDB
Parallel indexing in HAQM DocumentDB (with MongoDB compatibility) significantly reduces the time to create indexes. In this post, we show you how parallel indexing works, its benefits, and best practices for implementation.
Privileged database user activity monitoring using Database Activity Streams (DAS) and HAQM OpenSearch Service
In this post, we demonstrate how to create a centralized monitoring solution using Database Activity Streams and HAQM OpenSearch Service to meet audit requirements. The solution enables the security team to gather audit data from several Kinesis data streams, enrich, process, and store it with retention to meet compliance requirements, and produce relevant alarms and dashboards.
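The enrichment step described above can be sketched as a small transform that a stream consumer might apply to decrypted DAS events before indexing them into OpenSearch. The field names (`databaseActivityEventList`, `dbUserName`, `command`, `logTime`) follow the general DAS JSON layout, but the exact record shape and the privileged-user watch list here are illustrative assumptions, not taken from the post.

```python
# Sketch: filter decrypted Database Activity Streams (DAS) events for
# privileged users before indexing them into HAQM OpenSearch Service.
# Field names follow the general DAS JSON layout; the watch list and the
# output document shape are illustrative assumptions.
import json

PRIVILEGED_USERS = {"admin", "rdsadmin", "sysdba"}  # hypothetical watch list

def privileged_events(das_payload: str) -> list[dict]:
    """Return OpenSearch-ready documents for privileged-user activity."""
    record = json.loads(das_payload)
    docs = []
    for event in record.get("databaseActivityEventList", []):
        if event.get("dbUserName", "").lower() in PRIVILEGED_USERS:
            docs.append({
                "user": event["dbUserName"],
                "command": event.get("command"),
                "logTime": event.get("logTime"),
            })
    return docs

# Example decrypted payload with one privileged and one ordinary event
sample = json.dumps({
    "databaseActivityEventList": [
        {"dbUserName": "ADMIN", "command": "ALTER",
         "logTime": "2024-05-01T12:00:00Z"},
        {"dbUserName": "app_user", "command": "SELECT",
         "logTime": "2024-05-01T12:00:01Z"},
    ]
})
print(privileged_events(sample))
```

In a real pipeline, the resulting documents would be bulk-indexed into OpenSearch, where alarms and dashboards are built on top of them.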
Optimize costs with scheduled scaling of HAQM DocumentDB for read workloads
In this post, we show you two ways to schedule the scaling of your HAQM DocumentDB instance-based clusters to address anticipated read traffic patterns. By aligning your HAQM DocumentDB cluster scaling operations with the anticipated read traffic patterns, you can achieve optimal performance during peak loads and save costs by reducing the need to overprovision your cluster.
Introducing the Advanced Python Wrapper Driver for HAQM Aurora
Building upon our work with the Advanced JDBC (Java Database Connectivity) Wrapper Driver, we are continuing to enhance the scalability and resiliency of modern applications built with Python. The Advanced Python Wrapper Driver has been released as an open-source project under the Apache 2.0 License; you can find the project on GitHub. In this post, we provide details on how to use some of the features of the Advanced Python Wrapper Driver.
Upgrade HAQM RDS for SQL Server 2014 to a newer supported version using the AWS CLI
As SQL Server 2014 approaches its end of support on July 9, 2024, it's crucial to understand your options and take a proactive approach to planning and upgrading your SQL Server databases to a newer supported version. In this post, we show you how to use AWS Command Line Interface (AWS CLI) automation to upgrade your current RDS for SQL Server 2014 instance to a more recent supported version.
Exploring new features of Apache TinkerPop 3.7.x in HAQM Neptune
HAQM Neptune 1.3.2.0 now supports the Apache TinkerPop 3.7.x release line, introducing many major new features and improvements. In this post, we highlight the features that have the greatest impact on Gremlin developers using Neptune, to help you understand the implications of upgrading to these versions of Neptune and TinkerPop.
Import HAQM RDS Enhanced Monitoring metrics into HAQM CloudWatch
In this post, we show you how to import multiple Enhanced Monitoring metrics into CloudWatch and use the full capabilities of CloudWatch on those metrics.
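The import described above can be sketched as a transform from one Enhanced Monitoring log record (the JSON that RDS publishes to the RDSOSMetrics CloudWatch Logs group) into CloudWatch `MetricData` entries. The `instanceID` and `cpuUtilization` fields follow the published Enhanced Monitoring JSON layout; the metric names, namespace, and selection of CPU metrics here are illustrative assumptions.

```python
# Sketch: convert an HAQM RDS Enhanced Monitoring log record into
# CloudWatch PutMetricData entries. Field names follow the Enhanced
# Monitoring JSON layout; metric naming here is illustrative.
import json

def em_to_metric_data(raw_record: str) -> list[dict]:
    """Build CloudWatch MetricData entries from one Enhanced Monitoring record."""
    record = json.loads(raw_record)
    dims = [{"Name": "DBInstanceIdentifier", "Value": record["instanceID"]}]
    cpu = record.get("cpuUtilization", {})
    return [
        {"MetricName": f"cpu_{name}", "Dimensions": dims,
         "Value": value, "Unit": "Percent"}
        for name, value in cpu.items()
    ]

# Example Enhanced Monitoring record trimmed to the CPU section
sample = json.dumps({
    "instanceID": "mydb-instance",
    "cpuUtilization": {"total": 12.5, "user": 7.5, "system": 2.0},
})
metric_data = em_to_metric_data(sample)
print(metric_data)
# A consumer would then publish these with boto3, for example:
# cloudwatch.put_metric_data(Namespace="Custom/RDSEnhanced",
#                            MetricData=metric_data)
```

CloudWatch accepts at most 1,000 `MetricData` entries per `PutMetricData` call, so a production consumer would batch the output accordingly.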