AWS HPC Blog
Category: Best Practices
Characteristics of financial services HPC workloads in the cloud
This post explores the technical attributes of computationally demanding high performance computing (HPC) workloads in the financial services sector. By examining the key characteristics of your workloads, we guide you through a decision-tree approach to help determine the most suitable HPC platform for the cloud – whether that's a commercial vendor solution, an open-source option, or a fully cloud-native implementation.
The frugal HPC architect – ensuring effective FinOps for HPC workloads at scale
Running high performance computing (HPC) workloads on AWS offers immense scale and flexibility, but many on-premises approaches to cost management don't apply in the cloud. In this post, we explore the key levers for reducing unit costs and understanding consumption, and explain why efficiency and effectiveness are the key measures of success.
Adding functionality to your applications using multiple containers in AWS Batch
Discover how to coordinate multiple applications in separate containers within a single AWS Batch job definition. Learn the benefits of this approach and how to share resources between containers for more efficient, scalable deployments.
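To make the idea concrete, here is a minimal boto3 sketch of a two-container job definition. The job name, images, commands, and resource values are hypothetical placeholders, not code from the post; check the AWS Batch API reference for the full schema.

```python
import boto3

batch = boto3.client("batch")

# Hypothetical example: one job definition with a main application
# container plus a non-essential log-shipping sidecar. All names,
# images, and sizes below are illustrative placeholders.
response = batch.register_job_definition(
    jobDefinitionName="multi-container-example",
    type="container",
    ecsProperties={
        "taskProperties": [
            {
                "containers": [
                    {
                        "name": "main-app",
                        "image": "public.ecr.aws/example/main-app:latest",
                        "command": ["./run-simulation.sh"],
                        "essential": True,
                        "resourceRequirements": [
                            {"type": "VCPU", "value": "2"},
                            {"type": "MEMORY", "value": "4096"},
                        ],
                    },
                    {
                        "name": "log-shipper",
                        "image": "public.ecr.aws/example/log-shipper:latest",
                        "essential": False,
                        "resourceRequirements": [
                            {"type": "VCPU", "value": "1"},
                            {"type": "MEMORY", "value": "2048"},
                        ],
                    },
                ]
            }
        ]
    },
)
print(response["jobDefinitionArn"])
```

Because the sidecar is marked non-essential, the job's success or failure follows the main application container, while both containers share the job's compute resources.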
Enhancing Equity Strategy Backtesting with Synthetic Data: An Agent-Based Model Approach – part 2
Developing robust investment strategies requires thorough testing, but relying solely on historical data can introduce biases and limit your insights. Learn how synthetic data from agent-based models can provide an unbiased testbed to systematically evaluate your strategies and prepare for future market scenarios. Part 2 covers implementation details and results.
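As a toy illustration of the agent-based idea – not the model from this series – the sketch below moves a price in proportion to the net order imbalance of zero-intelligence agents who randomly buy or sell one unit each step.

```python
import random

# Toy agent-based market sketch (illustrative only): each step,
# every agent randomly buys (+1) or sells (-1) one unit, and the
# price moves in proportion to the net order imbalance.
def simulate_prices(n_agents=100, n_steps=250, impact=0.01,
                    p0=100.0, seed=42):
    rng = random.Random(seed)
    price = p0
    path = [price]
    for _ in range(n_steps):
        imbalance = sum(rng.choice((-1, 1)) for _ in range(n_agents))
        price *= 1.0 + impact * imbalance / n_agents
        path.append(price)
    return path

if __name__ == "__main__":
    prices = simulate_prices()
    print(f"start={prices[0]:.2f} end={prices[-1]:.2f}")
```

Even this minimal setup yields synthetic price paths you can generate in unlimited quantity, which is the property that makes agent-based data attractive for backtesting beyond a single historical record.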
Enhancing Equity Strategy Backtesting with Synthetic Data: An Agent-Based Model Approach
Developing robust investment strategies requires thorough testing, but relying solely on historical data can introduce biases and limit your insights. Learn how synthetic data from agent-based models can provide an unbiased testbed to systematically evaluate your strategies and prepare for future market scenarios. Part 1 of 2 covers the theoretical foundations of the approach.
Using the Terraform AWS Cloud Control provider for managing AWS Batch resources
The Terraform AWS Cloud Control (AWSCC) provider now supports AWS Batch job definitions, enabling you to leverage recent and future enhancements to AWS Batch. Learn more in our latest blog post.
Adding configurable namespaces, persistent volume claims, and other features for AWS Batch on Amazon EKS
Exciting updates to AWS Batch on Amazon EKS! Configurable namespaces, persistent volume claims, and more. Check out our blog post to see how these features can help manage your complex containerized workloads.
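As a rough boto3 sketch of how a job definition using these features might look – the namespace field placement, names, and values here are assumptions, so consult the post and the AWS Batch API reference for the authoritative schema:

```python
import boto3

batch = boto3.client("batch")

# Hypothetical sketch of an AWS Batch on Amazon EKS job definition
# that mounts a persistent volume claim and targets a specific
# namespace. Field names marked "assumed" are not verified here.
batch.register_job_definition(
    jobDefinitionName="eks-pvc-example",
    type="container",
    eksProperties={
        "podProperties": {
            "metadata": {"namespace": "research"},  # assumed field placement
            "containers": [
                {
                    "name": "worker",
                    "image": "public.ecr.aws/example/worker:latest",
                    "resources": {
                        "requests": {"cpu": "1", "memory": "2048Mi"}
                    },
                    "volumeMounts": [
                        {"name": "shared-data", "mountPath": "/data"}
                    ],
                }
            ],
            "volumes": [
                {
                    "name": "shared-data",
                    "persistentVolumeClaim": {
                        "claimName": "shared-data-pvc"
                    },
                }
            ],
        }
    },
)
```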
Building a secure and compliant HPC environment on AWS following NIST SP 800-223
Check out our latest blog post to learn how AWS enables building secure, compliant high performance computing (HPC) environments aligned with NIST SP 800-223 guidelines. We walk through the key components, security considerations, and steps for deploying a zone-based HPC architecture on AWS.
Improve engineering productivity using AWS Engineering License Management
This post was contributed by Eran Brown, Principal Engagement Manager, Prototyping Team; Vedanth Srinivasan, Head of Solutions, Engineering & Design; Edmund Chute, Specialist SA, Solution Builder; and Priyanka Mahankali, Senior Specialist SA, Emerging Domains. For engineering companies, the cost of Computer Aided Design and Engineering (CAD/CAE) tools can be as high as 20% of product development cost. […]
Optimizing compute-intensive tasks on AWS
Optimizing workloads for performance and cost-effectiveness is crucial for businesses of all sizes – and especially helpful for workloads in the cloud, where there are a lot of levers you can pull to tune how things run. AWS offers a vast array of instance types in Amazon Elastic Compute Cloud (Amazon EC2) – each with […]
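For example, a short boto3 snippet can compare vCPU and memory across candidate instance types when tuning a workload; the instance types listed below are just illustrative choices.

```python
import boto3

ec2 = boto3.client("ec2")

# Illustrative sketch: compare vCPU count and memory across a few
# compute-optimized instance types. The specific types queried
# here are examples, not recommendations.
resp = ec2.describe_instance_types(
    InstanceTypes=["c5.4xlarge", "c6i.4xlarge", "c7g.4xlarge"]
)
for it in sorted(resp["InstanceTypes"], key=lambda t: t["InstanceType"]):
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{it["InstanceType"]}: {vcpus} vCPUs, {mem_gib:.0f} GiB')
```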