AWS Database Blog

Category: Analytics

How HAQM Finance Automation built an operational data store with AWS purpose-built databases to power critical finance applications

In this post, we discuss how the HAQM Finance Automation team used AWS purpose-built databases, such as HAQM DynamoDB, HAQM OpenSearch Service, and HAQM Neptune, coupled with serverless compute like AWS Lambda, to build an Operational Data Store (ODS) that stores financial transactional data and supports FinOps applications with millisecond latency. This data is a key enabler for the FinOps business.

Improve cost visibility of an HAQM RDS multi-tenant instance with Performance Insights and HAQM Athena

In this post, we introduce a solution that addresses a common challenge faced by many customers: managing costs in multi-tenant applications, particularly for shared databases in HAQM Relational Database Service (HAQM RDS) and HAQM Aurora. The solution uses HAQM RDS Performance Insights and AWS Cost and Usage Reports (CUR) to address this challenge. It allows efficient grouping of tenants within the same RDS or Aurora instances, while helping you implement accurate chargeback models, optimize resource-intensive workloads, and make data-driven decisions for capacity planning.
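The chargeback idea behind this pattern can be illustrated with a small calculation: apportion an instance's cost across tenants in proportion to each tenant's share of database load (for example, average active sessions as reported by Performance Insights). The function below is a hypothetical sketch for illustration, not code from the post; the tenant names and dollar amounts are invented.

```python
def apportion_cost(instance_cost: float, tenant_load: dict[str, float]) -> dict[str, float]:
    """Split an instance's cost across tenants proportionally to their
    measured database load (e.g., average active sessions per tenant)."""
    total = sum(tenant_load.values())
    if total == 0:
        # No measured load: split evenly so the full cost is still attributed.
        share = instance_cost / len(tenant_load)
        return {tenant: share for tenant in tenant_load}
    return {tenant: instance_cost * load / total for tenant, load in tenant_load.items()}

# Example: a $300/month instance shared by three tenants.
costs = apportion_cost(300.0, {"tenant_a": 6.0, "tenant_b": 3.0, "tenant_c": 1.0})
print(costs)  # {'tenant_a': 180.0, 'tenant_b': 90.0, 'tenant_c': 30.0}
```

In practice the cost input would come from CUR line items for the instance and the load fractions from Performance Insights metrics, joined in HAQM Athena.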

Gather organization-wide HAQM RDS orphan snapshot insights using AWS Step Functions and HAQM QuickSight

In this post, we walk you through a solution to aggregate RDS orphan snapshots across accounts and AWS Regions, enabling automation and organization-wide visibility to optimize cloud spend based on data-driven insights. Cross-Region copied snapshots, Aurora cluster copied snapshots, and shared snapshots are out of scope for this solution. The solution uses AWS Step Functions orchestration together with AWS Lambda functions to generate orphan snapshot metadata across your organization. The generated metadata is stored in HAQM Simple Storage Service (HAQM S3) and transformed into an HAQM Athena table by AWS Glue. HAQM QuickSight uses the Athena table to generate orphan snapshot insights.
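The core orphan-detection step can be sketched in a few lines (an illustrative sketch, not the post's implementation): a manual RDS snapshot can be treated as orphaned when its source DB instance no longer exists. With metadata shaped like the output of the RDS `DescribeDBSnapshots` and `DescribeDBInstances` APIs, the check reduces to a set lookup:

```python
def find_orphan_snapshots(snapshots: list[dict], instances: list[dict]) -> list[str]:
    """Return identifiers of manual snapshots whose source DB instance is gone."""
    live = {i["DBInstanceIdentifier"] for i in instances}
    return [
        s["DBSnapshotIdentifier"]
        for s in snapshots
        if s.get("SnapshotType") == "manual"
        and s["DBInstanceIdentifier"] not in live
    ]

# Illustrative metadata; in the real solution, Lambda functions would gather this
# per account and Region (e.g., via boto3 describe_db_snapshots/describe_db_instances).
snaps = [
    {"DBSnapshotIdentifier": "snap-old", "DBInstanceIdentifier": "db-retired", "SnapshotType": "manual"},
    {"DBSnapshotIdentifier": "snap-live", "DBInstanceIdentifier": "db-prod", "SnapshotType": "manual"},
    {"DBSnapshotIdentifier": "snap-auto", "DBInstanceIdentifier": "db-retired", "SnapshotType": "automated"},
]
dbs = [{"DBInstanceIdentifier": "db-prod"}]
print(find_orphan_snapshots(snaps, dbs))  # ['snap-old']
```

Automated snapshots are excluded here because RDS deletes them with the instance by default; only manual snapshots linger as orphans.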

How Skello uses AWS DMS to synchronize data from a monolithic application to microservices

Skello is a human resources (HR) software-as-a-service (SaaS) platform that focuses on employee scheduling and workforce management. It caters to various sectors, including hospitality, retail, healthcare, construction, and industry. In this post, we show how Skello uses AWS Database Migration Service (AWS DMS) to synchronize data from a monolithic architecture to microservices and to ingest data from both the monolithic architecture and the microservices into our data lake.

How Channel Corporation modernized their architecture with HAQM DynamoDB, Part 2: Streams

Channel Corporation is a B2B software as a service (SaaS) startup that operates the all-in-one artificial intelligence (AI) messenger Channel Talk. In Part 1 of this series, we introduced our motivation for NoSQL adoption, technical problems with business growth, and considerations for migration from PostgreSQL to HAQM DynamoDB. In this post, we share our experience integrating with other services to solve areas that couldn’t be addressed with DynamoDB alone.

Build a streaming ETL pipeline on HAQM RDS using HAQM MSK

Customers who host their transactional database on HAQM Relational Database Service (HAQM RDS) often seek architecture guidance on building streaming extract, transform, and load (ETL) pipelines to destinations such as HAQM Redshift. This post outlines an architecture pattern for creating a streaming data pipeline using HAQM Managed Streaming for Apache Kafka (HAQM MSK). HAQM MSK offers a fully managed Apache Kafka service, enabling you to ingest and process streaming data in real time.

Modernize your legacy databases with AWS data lakes, Part 1: Migrate SQL Server using AWS DMS

This post is the first in a three-part series in which we discuss the end-to-end process of building a data lake from a legacy SQL Server database. In this post, we show you how to build data pipelines that replicate data from Microsoft SQL Server to a data lake in HAQM S3 using AWS DMS. You can extend the solution presented in this post to other database engines such as PostgreSQL, MySQL, and Oracle.

HAQM Aurora PostgreSQL zero-ETL integration with HAQM Redshift is generally available

In this post, we discuss the challenges with traditional data analytics mechanisms, our approach to solve them, and how you can use HAQM Aurora PostgreSQL-Compatible Edition zero-ETL integration with HAQM Redshift, which is generally available as of October 15th, 2024.

Vector search for HAQM DynamoDB with zero ETL for HAQM OpenSearch Service

As organizations increasingly rely on HAQM DynamoDB for their operational database needs, the demand for advanced data insights and enhanced search capabilities continues to grow. Leveraging the power of HAQM OpenSearch Service and HAQM Bedrock, you can now unlock generative artificial intelligence (AI) capabilities for your DynamoDB data. In this post, we show how you […]

How Prisma Cloud built Infinity Graph using HAQM Neptune and HAQM OpenSearch Service

Palo Alto Networks’ Prisma Cloud is a leading cloud security platform that protects enterprise cloud adoption from code to cloud workflows. Palo Alto Networks chose HAQM Neptune Database and HAQM OpenSearch Service as the core services to power its Infinity Graph. In this post, we discuss the scale Palo Alto Networks requires from these core services and how we designed a solution to meet these needs. We focus on the Neptune design decisions and benefits, and explain how OpenSearch Service fits into the design without diving into implementation details.