Choose from leading FMs
HAQM Bedrock makes building with a range of foundation models (FMs) as straightforward as an API call. It provides access to leading models including AI21 Labs' Jurassic, Anthropic's Claude, Cohere's Command and Embed, Meta's Llama 2, and Stability AI's Stable Diffusion, as well as our own HAQM Titan models. With HAQM Bedrock, you can select the FM best suited to your use case and application requirements.
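As a rough sketch of what "as straightforward as an API call" looks like in practice, the snippet below invokes a model through the Bedrock runtime API using the AWS SDK for Python (boto3). The model ID, prompt, and response shape are illustrative assumptions for an Anthropic Claude model; substitute whichever FM you have access to.

```python
import json


def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON request body for an Anthropic Claude messages request.

    The body format is model-specific; this shape is an assumption for
    Claude models on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke(prompt: str) -> str:
    """Send one prompt to a chosen FM and return the generated text."""
    import boto3  # AWS SDK for Python; requires configured credentials

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",  # placeholder: any FM you have access to
        body=build_claude_request(prompt),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]


if __name__ == "__main__":
    print(invoke("Summarize the benefits of managed foundation models."))
```

Because each model family expects its own request body, swapping models with this low-level API means swapping the body format as well; the Converse API described later removes that per-model difference.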

Experiment with FMs for different tasks
Experiment with different FMs using interactive playgrounds for a variety of modalities, including text, chat, and image. The playgrounds let you try out models with your own prompts to get a feel for each model's suitability for a given task.

Evaluate FMs to select the best one for your use case
Model Evaluation on HAQM Bedrock allows you to use automatic and human evaluations to select FMs for a specific use case. Automatic model evaluation uses curated datasets and provides predefined metrics, including accuracy, robustness, and toxicity. For subjective metrics, you can use HAQM Bedrock to set up a human evaluation workflow in a few quick steps. With human evaluations, you can bring your own datasets and define custom metrics, such as relevance, style, and alignment to brand voice. Human evaluation workflows can use your own employees as reviewers, or you can engage an AWS-managed team, in which case AWS hires skilled evaluators and manages the complete workflow on your behalf. To learn more, read the blog.

Privately customize FMs with your data
In a few quick steps, HAQM Bedrock lets you go from generic models to ones that are specialized and customized for your business and use case. To adapt an FM for a specific task, you can use a technique called fine-tuning. Point to a few labeled examples in HAQM Simple Storage Service (HAQM S3), and HAQM Bedrock makes a copy of the base model, trains it with your data, and creates a fine-tuned model accessible only to you, so you get customized responses. Fine-tuning is available for Command, Llama 2, HAQM Titan Text Lite and Express, HAQM Titan Image Generator, and HAQM Titan Multimodal Embeddings models. A second way you can adapt HAQM Titan Text Lite and HAQM Titan Express FMs in HAQM Bedrock is with continued pretraining, a technique that uses your unlabeled datasets to customize the FM for your domain or industry. With both fine-tuning and continued pretraining, HAQM Bedrock creates a private, customized copy of the base FM for you, and your data is not used to train the original base models. Your data used to customize models is securely transferred through your HAQM Virtual Private Cloud (HAQM VPC). To learn more, read the blog.
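The fine-tuning flow above can be sketched with the boto3 Bedrock control-plane client. All job names, ARNs, S3 URIs, and the base model identifier below are placeholders you would replace with your own values.

```python
def customization_job_params(job_name: str, model_name: str, role_arn: str,
                             base_model: str, training_s3_uri: str,
                             output_s3_uri: str,
                             customization_type: str = "FINE_TUNING") -> dict:
    """Assemble parameters for a model customization job.

    Set customization_type to "CONTINUED_PRE_TRAINING" to use unlabeled
    data instead of labeled fine-tuning examples."""
    return {
        "jobName": job_name,
        "customModelName": model_name,
        "roleArn": role_arn,  # IAM role with access to the S3 buckets
        "baseModelIdentifier": base_model,
        "customizationType": customization_type,
        "trainingDataConfig": {"s3Uri": training_s3_uri},
        "outputDataConfig": {"s3Uri": output_s3_uri},
    }


def start_fine_tuning() -> dict:
    """Kick off a fine-tuning job; requires configured AWS credentials."""
    import boto3

    bedrock = boto3.client("bedrock")
    params = customization_job_params(
        job_name="my-tuning-job",                      # placeholder
        model_name="my-custom-model",                  # placeholder
        role_arn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
        base_model="amazon.titan-text-lite-v1",        # placeholder base FM
        training_s3_uri="s3://my-bucket/train.jsonl",
        output_s3_uri="s3://my-bucket/output/",
    )
    return bedrock.create_model_customization_job(**params)
```

Once the job completes, the resulting custom model is private to your account and can be invoked like any other FM after you provision throughput for it.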

Converse API
The Converse API gives developers a consistent way to invoke HAQM Bedrock models, removing the need to account for model-specific differences such as inference parameters.
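A minimal sketch of that consistency, again assuming boto3: the same message structure and inference configuration work regardless of which model ID you pass, so switching models is a one-string change. The model ID and parameter values are illustrative.

```python
def build_converse_messages(user_text: str) -> list:
    """Build the model-agnostic message list used by the Converse API."""
    return [{"role": "user", "content": [{"text": user_text}]}]


def chat(model_id: str, user_text: str) -> str:
    """Send one user turn to any Bedrock model via the Converse API."""
    import boto3  # requires configured AWS credentials

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,  # e.g. a Claude, Llama, or Titan model ID
        messages=build_converse_messages(user_text),
        inferenceConfig={"maxTokens": 256, "temperature": 0.5},
    )
    return response["output"]["message"]["content"][0]["text"]
```

Contrast this with the lower-level invoke_model call, where the request body must match each model family's own JSON format.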
Bidirectional Streaming API
The Bidirectional Streaming API enables simultaneous data exchange between client and server, creating natural conversational experiences with models like HAQM Nova Sonic. It delivers seamless audio-text integration, allowing your applications to listen and respond in real time, complete with natural interjections and fluid interactions. Developers can integrate audio, text, and video interactions with minimal latency and maximum context preservation, without having to manage multi-modal communication streams themselves.