AWS News Blog
Category: Artificial Intelligence
Introducing Amazon Nova foundation models: Frontier intelligence and industry-leading price performance
Amazon Nova foundation models deliver frontier intelligence and industry-leading price performance, with support for text and multimodal intelligence, multimodal fine-tuning, and high-quality image and video generation.
Introducing multi-agent collaboration capability for Amazon Bedrock (preview)
With multi-agent collaboration on Amazon Bedrock, developers can build, deploy, and manage multiple specialized agents working together seamlessly to tackle more intricate, multi-step workflows.
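The pattern behind multi-agent collaboration can be sketched without any SDK: a supervisor routes each task to the specialized agent best suited for it and returns that agent's result. The sketch below is a minimal, framework-free illustration of that supervisor pattern; all names in it are hypothetical stand-ins, not Amazon Bedrock API calls.

```python
# A minimal sketch of the supervisor pattern behind multi-agent
# collaboration: a supervisor routes each task to the specialized agent
# best suited for it. All names here are illustrative, not Bedrock APIs.
from typing import Callable, Dict

Agent = Callable[[str], str]

def make_supervisor(agents: Dict[str, Agent]) -> Agent:
    """Route a task to a specialist by keyword, then return its answer."""
    def supervisor(task: str) -> str:
        for topic, agent in agents.items():
            if topic in task.lower():
                return agent(task)
        return "no specialist available"
    return supervisor

# Two toy specialists standing in for managed Bedrock agents.
billing = lambda t: "billing-agent handled: " + t
travel = lambda t: "travel-agent handled: " + t

route = make_supervisor({"invoice": billing, "flight": travel})
```

In the managed feature, the supervisor is itself a model that plans and delegates; the toy keyword router above only shows the control flow, not the planning.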
Prevent factual errors from LLM hallucinations with mathematically sound Automated Reasoning checks (preview)
Enhance conversational AI accuracy with Automated Reasoning checks, the first and only generative AI safeguard that helps reduce hallucinations by encoding domain rules into verifiable policies.
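"Encoding domain rules into verifiable policies" means turning informal business rules into explicit, machine-checkable predicates that every model claim must satisfy. The toy rule engine below illustrates that idea only; the policy contents and function names are invented for this example and are not the Bedrock Automated Reasoning feature.

```python
# Automated Reasoning checks, in spirit: encode domain rules as explicit,
# machine-checkable predicates and verify extracted model claims against
# them before they reach the user. A toy rule engine, not the AWS feature.
from typing import Callable, Dict, List

Rule = Callable[[Dict[str, float]], bool]

# Illustrative policy: domain rules for a hypothetical loan assistant.
POLICY: List[Rule] = [
    lambda c: c["rate"] >= 0,           # interest rates are never negative
    lambda c: c["term_months"] <= 360,  # maximum term is 30 years
]

def check(claims: Dict[str, float], policy: List[Rule]) -> bool:
    """True only if every rule in the policy holds for the claims."""
    return all(rule(claims) for rule in policy)
```

The managed feature goes further, using formal logic to prove or refute claims; the sketch only shows why explicit rules make verification mechanical rather than statistical.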
Build faster, more cost-efficient, highly accurate models with Amazon Bedrock Model Distillation (preview)
Easily transfer knowledge from a large, complex model to a smaller one.
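The core idea of distillation is to train the small "student" model to match the large "teacher" model's softened output distribution rather than only the hard labels. The NumPy sketch below shows the classic temperature-scaled KL-divergence distillation loss; it is a generic, from-scratch illustration of the technique, not the Bedrock Model Distillation API.

```python
# Knowledge distillation in a nutshell: the student is trained to match
# the teacher's temperature-softened output distribution. Generic sketch,
# not the Amazon Bedrock Model Distillation API.
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))
```

A higher temperature flattens the teacher's distribution, exposing how it ranks the "wrong" answers, which is precisely the knowledge the hard labels discard.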
Amazon EC2 Trn2 instances and Trn2 UltraServers for AI/ML training and inference are now available
With 4x the speed, 4x the memory bandwidth, and 3x the memory capacity of their predecessors, plus 30% more floating-point operations, these instances deliver unprecedented compute power for ML training and generative AI.
Enhance your productivity with new extensions and integrations in Amazon Q Business
Seamlessly access AI assistance within work applications with Amazon Q Business’s new browser extensions and integrations.
New RAG evaluation and LLM-as-a-judge capabilities in Amazon Bedrock
Evaluate AI models and applications efficiently with Amazon Bedrock’s new LLM-as-a-judge capability for model evaluation and RAG evaluation for Knowledge Bases, offering a variety of quality and responsible AI metrics at scale.
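Schematically, LLM-as-a-judge means having a (typically stronger) model grade another model's answers against a rubric, then aggregating the scores into a metric. The harness below shows that loop in plain Python; the keyword-based judge is a stub standing in for a real model call, and all names are hypothetical rather than Bedrock API identifiers.

```python
# LLM-as-a-judge, schematically: a judge grades (question, answer) pairs
# and the scores are aggregated into one metric. The keyword judge is a
# stub standing in for a real model call; names are illustrative.
from statistics import mean
from typing import Callable, List, Tuple

Judge = Callable[[str, str], int]  # (question, answer) -> score 1..5

def evaluate(pairs: List[Tuple[str, str]], judge: Judge) -> float:
    """Return the mean judge score over (question, answer) pairs."""
    return mean(judge(q, a) for q, a in pairs)

# Stub judge: rewards answers that mention the question's final topic word.
def keyword_judge(question: str, answer: str) -> int:
    topic = question.split()[-1].rstrip("?").lower()
    return 5 if topic in answer.lower() else 1
```

In practice the judge is itself prompted with a rubric and its scores are calibrated against human ratings; the stub only shows where that call slots into the evaluation loop.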
New APIs in Amazon Bedrock to enhance RAG applications, now available
Custom connectors let you ingest documents directly into knowledge bases without requiring a full sync, while new reranking models improve the relevance of RAG responses.
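Conceptually, reranking is a second retrieval stage: a model rescores the already-retrieved passages against the query so the most relevant ones rise to the top before generation. The sketch below uses a simple lexical-overlap scorer as a stand-in for a reranking model; it illustrates the two-stage pattern only and is not the Amazon Bedrock reranking API.

```python
# Reranking, conceptually: a second-stage scorer reorders retrieved
# passages by relevance to the query. The lexical-overlap scorer is a
# stand-in for a reranking model, not a Bedrock API call.
from typing import List

def overlap_score(query: str, passage: str) -> float:
    """Fraction of the query's terms that appear in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / len(q_terms)

def rerank(query: str, passages: List[str]) -> List[str]:
    """Return passages sorted from most to least relevant."""
    return sorted(passages, key=lambda p: overlap_score(query, p),
                  reverse=True)
```

A real reranking model scores query-passage pairs jointly with a cross-encoder rather than by term overlap, but the pipeline position, after retrieval and before generation, is the same.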