Containers
Tag: HAQM CloudWatch
Running Windows Containers with HAQM ECS on AWS Fargate
At AWS, customers run some of their most mission-critical workloads on HAQM Elastic Container Service (HAQM ECS) with Windows as the compute layer. Still, the undifferentiated heavy lifting of running Windows containers, such as managing, patching, scaling, and hardening the underlying host OS, is time-consuming. Therefore, customers can choose to use the optimized AMIs, which are preconfigured […]
Introducing CloudWatch Container Insights Prometheus Support with AWS Distro for OpenTelemetry on HAQM ECS and HAQM EKS
You can use CloudWatch Container Insights to monitor, troubleshoot, and alarm on your containerized applications and microservices. HAQM CloudWatch collects, aggregates, and summarizes compute utilization information like CPU, memory, disk, and network data. It also helps you isolate issues and resolve them quickly by providing diagnostic information like container restart failures. Container Insights gives you […]
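As one illustration of the metrics collection the excerpt describes, here is a minimal sketch that assembles a CloudWatch `GetMetricData` request for a Container Insights metric (the `ECS/ContainerInsights` namespace). The cluster name is a hypothetical placeholder, and the resulting dict would be passed to a boto3 CloudWatch client's `get_metric_data` call.

```python
# Sketch: build GetMetricData kwargs for one Container Insights metric.
# "demo-cluster" is a hypothetical cluster name.
from datetime import datetime, timedelta, timezone

def container_insights_query(cluster_name, metric_name="CpuUtilized", minutes=60):
    """Return GetMetricData kwargs for a single ECS Container Insights metric."""
    end = datetime.now(timezone.utc)
    return {
        "MetricDataQueries": [
            {
                "Id": "cpu",
                "MetricStat": {
                    "Metric": {
                        "Namespace": "ECS/ContainerInsights",
                        "MetricName": metric_name,
                        "Dimensions": [
                            {"Name": "ClusterName", "Value": cluster_name}
                        ],
                    },
                    "Period": 60,       # one datapoint per minute
                    "Stat": "Average",
                },
            }
        ],
        "StartTime": end - timedelta(minutes=minutes),
        "EndTime": end,
    }

query = container_insights_query("demo-cluster")
print(query["MetricDataQueries"][0]["MetricStat"]["Metric"]["Namespace"])
```

In practice you would unpack this with `cloudwatch.get_metric_data(**query)`; building the payload separately keeps the query easy to inspect and reuse.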
Autoscaling HAQM ECS services based on custom CloudWatch and Prometheus metrics
Introduction Horizontal scalability is a critical aspect of cloud-native applications. Microservices deployed to HAQM ECS use the Application Auto Scaling service to scale automatically based on observed metrics. HAQM ECS measures service utilization from the CPU and memory resources consumed by the tasks that belong to a service and publishes CloudWatch metrics, namely, […]
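The excerpt above describes target tracking on the service-level CPU metric that ECS publishes. A minimal sketch, assuming hypothetical cluster and service names: the function below builds the payload for the Application Auto Scaling `put_scaling_policy` call (the boto3 `application-autoscaling` client) using the predefined `ECSServiceAverageCPUUtilization` metric.

```python
# Sketch: build a target-tracking scaling policy payload for an ECS service.
# Cluster/service names below are hypothetical placeholders.
import json

def target_tracking_policy(cluster, service, target_cpu_percent):
    """Return kwargs for application-autoscaling put_scaling_policy."""
    return {
        "PolicyName": f"{service}-cpu-target-tracking",
        "ServiceNamespace": "ecs",
        # Application Auto Scaling addresses an ECS service by this ID form.
        "ResourceId": f"service/{cluster}/{service}",
        "ScalableDimension": "ecs:service:DesiredCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            # One of the predefined CloudWatch metrics ECS publishes.
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "TargetValue": float(target_cpu_percent),
            "ScaleInCooldown": 60,
            "ScaleOutCooldown": 60,
        },
    }

policy = target_tracking_policy("demo-cluster", "web-api", 70)
print(json.dumps(policy, indent=2))
```

Before attaching the policy, the service must be registered as a scalable target (`register_scalable_target`) with the same `ResourceId` and `ScalableDimension`.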
Create a pipeline with canary deployments for HAQM EKS with AWS App Mesh
NOTICE: October 04, 2024 – This post no longer reflects the best guidance for configuring a service mesh with HAQM EKS and its examples no longer work as shown. Please refer to newer content on HAQM VPC Lattice. In this post, we will demonstrate how customers can leverage different AWS services in conjunction with […]
Autoscaling HAQM EKS services based on custom Prometheus metrics using CloudWatch Container Insights
Introduction In a Kubernetes cluster, the Horizontal Pod Autoscaler can automatically scale the number of Pods in a Deployment based on observed CPU utilization and memory usage. The autoscaler depends on the Kubernetes metrics server, which collects resource metrics from kubelets and exposes them through the Metrics API in the Kubernetes API server. The metrics server has […]
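A minimal sketch of the mechanism the excerpt describes: an `autoscaling/v2` HorizontalPodAutoscaler that scales a Deployment on the CPU utilization reported by the metrics server. The Deployment name, replica bounds, and threshold are illustrative assumptions, not values from the post.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Scaling on custom Prometheus metrics, as the post's title suggests, follows the same shape but uses a `type: External` (or `Pods`) metric served by a metrics adapter instead of the built-in `Resource` metric.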
CI/CD pipeline for testing containers on AWS Fargate with scaling to zero
Development teams run manual and automated tests several times a day on their feature branches. Running tests locally is only part of the process: testing workloads against other systems, and giving QA engineers access, requires deploying code to dedicated environments. These servers/VMs spend hours idling because new test […]