AWS News Blog
Stable Diffusion 3.5 Large is now available in Amazon Bedrock
Unleash your creativity: Stable Diffusion 3.5 Large in Amazon Bedrock generates stunning high-resolution images with superior detail, style variety, and prompt adherence for accelerated visual content creation.
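As a minimal sketch, the model can be called through the Bedrock Runtime `InvokeModel` API with boto3. The model ID and the exact request/response fields below are assumptions based on the Stability model family on Bedrock; verify them in the Bedrock console for your region.

```python
import base64
import json

# Assumed model ID -- confirm in the Bedrock console for your region.
MODEL_ID = "stability.sd3-5-large-v1:0"

def build_request(prompt: str, aspect_ratio: str = "1:1") -> str:
    """Serialize the JSON request body for a text-to-image call."""
    return json.dumps({
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "output_format": "png",
    })

def generate_image(prompt: str, region: str = "us-east-1") -> bytes:
    """Call InvokeModel and return decoded PNG bytes (needs AWS credentials
    and Bedrock model access enabled for the account)."""
    import boto3
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(modelId=MODEL_ID, body=build_request(prompt))
    payload = json.loads(response["body"].read())
    # Stability models on Bedrock return base64-encoded images.
    return base64.b64decode(payload["images"][0])

# Example usage (requires credentials and model access):
#   png = generate_image("a lighthouse at dusk, watercolor style")
#   open("lighthouse.png", "wb").write(png)
```

Keeping `build_request` separate from the network call makes the payload easy to inspect or unit-test before spending inference quota.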
New Amazon EC2 High Memory U7inh instance on HPE Server for large in-memory databases
Leverage 1,920 vCPUs and 32 TB of memory with high-performance U7inh instances from AWS, powered by Intel Xeon Scalable processors; seamlessly migrate SAP HANA and other mission-critical workloads while benefiting from cloud scalability and cost savings.
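When sizing for in-memory databases, note that the EC2 `DescribeInstanceTypes` API reports memory in MiB. A small sketch of filtering candidate types by memory floor, using illustrative records shaped like the real `MemoryInfo`/`VCpuInfo` fields (the instance-type names and figures here are sample data, not API output):

```python
# Illustrative DescribeInstanceTypes-style records. MemoryInfo.SizeInMiB and
# VCpuInfo.DefaultVCpus mirror the field names EC2 actually returns.
SAMPLE_TYPES = [
    {"InstanceType": "u7inh-32tb.480xlarge",
     "VCpuInfo": {"DefaultVCpus": 1920},
     "MemoryInfo": {"SizeInMiB": 32 * 1024 * 1024}},   # 32 TiB
    {"InstanceType": "r7i.48xlarge",
     "VCpuInfo": {"DefaultVCpus": 192},
     "MemoryInfo": {"SizeInMiB": 1536 * 1024}},        # 1.5 TiB
]

def mib_to_tib(mib: int) -> float:
    """Convert MiB to TiB (1 TiB = 1024 * 1024 MiB)."""
    return mib / (1024 * 1024)

def big_enough(types, min_tib: float):
    """Return instance-type names whose memory meets the given floor."""
    return [t["InstanceType"] for t in types
            if mib_to_tib(t["MemoryInfo"]["SizeInMiB"]) >= min_tib]
```

With live data, the same filter runs over the pages of `boto3.client("ec2").describe_instance_types()`.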
Accelerate foundation model training and fine-tuning with new Amazon SageMaker HyperPod recipes
Amazon SageMaker HyperPod recipes help customers get started with training and fine-tuning popular publicly available foundation models, like Llama 3.1 405B, in just minutes with state-of-the-art performance.
Meet your training timelines and budgets with new Amazon SageMaker HyperPod flexible training plans
Unlock efficient large-model training with SageMaker HyperPod flexible training plans: find optimal compute resources and complete training within your timelines and budgets.
Maximize accelerator utilization for model development with new Amazon SageMaker HyperPod task governance
Enable priority-based resource allocation, fair-share utilization, and automated task preemption for optimal compute utilization across teams.
New Amazon Q Developer agent capabilities include generating documentation, code reviews, and unit tests
Amazon Q Developer agents now boost coding productivity by auto-generating documentation, conducting code reviews, and creating unit tests, directly within IDEs and GitLab.
Build faster, more cost-efficient, highly accurate models with Amazon Bedrock Model Distillation (preview)
Easily transfer knowledge from a large, complex model to a smaller one.
New Amazon EC2 P5en instances with NVIDIA H200 Tensor Core GPUs and EFAv3 networking
Amazon EC2 P5en instances deliver up to 3,200 Gbps of network bandwidth with EFAv3, accelerating deep learning, generative AI, and HPC workloads with unmatched efficiency.