AWS Machine Learning Blog

Category: Best Practices

[Image: AWS Step Functions state machine for audio processing: Whisper transcription, speaker identification, and Bedrock summary tasks]

Build a serverless audio summarization solution with HAQM Bedrock and Whisper

In this post, we demonstrate how to produce near real-time transcription with the OpenAI Whisper Large V3 Turbo foundation model (FM), available in HAQM Bedrock Marketplace, which offers access to over 140 models through a dedicated offering. These transcriptions are then processed by HAQM Bedrock for summarization and redaction of sensitive information.
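
As a minimal sketch of the transcribe-then-summarize flow, the snippet below assumes the Whisper Large V3 Turbo model from HAQM Bedrock Marketplace has already been deployed to a SageMaker endpoint; the endpoint name, audio payload format, and response fields are illustrative assumptions, not the exact contract from the post:

import json
import boto3

sm_runtime = boto3.client("sagemaker-runtime")
bedrock = boto3.client("bedrock-runtime")

# Transcribe: send raw audio bytes to the deployed Whisper endpoint.
# The endpoint name and request/response shapes are assumptions.
with open("meeting.mp3", "rb") as f:
    audio_bytes = f.read()

transcribe_resp = sm_runtime.invoke_endpoint(
    EndpointName="whisper-large-v3-turbo",  # hypothetical endpoint name
    ContentType="audio/mpeg",
    Body=audio_bytes,
)
transcript = json.loads(transcribe_resp["Body"].read())["text"]

# Summarize and redact with a Bedrock text model via the Converse API.
summary_resp = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{
            "text": "Summarize this transcript and redact any personally "
                    f"identifiable information:\n\n{transcript}"
        }],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)
print(summary_resp["output"]["message"]["content"][0]["text"])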

Implement semantic video search using open source large vision models on HAQM SageMaker and HAQM OpenSearch Serverless

In this post, we demonstrate how to use large vision models (LVMs) for semantic video search using natural language and image queries. We introduce some use case-specific methods, such as temporal frame smoothing and clustering, to enhance the video search performance. Furthermore, we demonstrate the end-to-end functionality of this approach by using both asynchronous and real-time hosting options on HAQM SageMaker AI to perform video, image, and text processing using publicly available LVMs on the Hugging Face Model Hub. Finally, we use HAQM OpenSearch Serverless with its vector engine for low-latency semantic video search.
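
To illustrate the retrieval side, the sketch below embeds a text query with an open source CLIP model and runs a k-NN search against an OpenSearch Serverless vector index. The collection endpoint, index name, and vector field are placeholder assumptions, and in the post the embeddings are produced by LVMs hosted on SageMaker AI rather than locally:

import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
from sentence_transformers import SentenceTransformer

# Open source CLIP model from the Hugging Face Hub; it embeds text and
# images into the same vector space, so either can serve as the query.
model = SentenceTransformer("clip-ViT-B-32")
query_vector = model.encode("a person skiing down a snowy slope").tolist()

# Sign requests to the OpenSearch Serverless collection (service "aoss").
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "us-east-1", "aoss")

client = OpenSearch(
    hosts=[{"host": "my-collection.us-east-1.aoss.amazonaws.com", "port": 443}],  # placeholder
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# k-NN query against the frame-embedding index; "frame_embedding" is a
# placeholder field name for the stored video frame vectors.
results = client.search(
    index="video-frames",
    body={"size": 5, "query": {"knn": {"frame_embedding": {"vector": query_vector, "k": 5}}}},
)
for hit in results["hits"]["hits"]:
    print(hit["_source"].get("video_id"), hit["_score"])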

Multi-account support for HAQM SageMaker HyperPod task governance

In this post, we discuss how an enterprise with multiple accounts can access a shared HAQM SageMaker HyperPod cluster for running their heterogeneous workloads. We use SageMaker HyperPod task governance to enable this capability.
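
As a minimal sketch of the cross-account piece, the snippet below assumes the shared services account exposes an IAM role that participant accounts can assume in order to reach the EKS cluster backing the shared HyperPod cluster; the role ARN and cluster name are hypothetical:

import boto3

# From a participant account, assume the access role in the shared
# services account that owns the SageMaker HyperPod cluster.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/HyperPodParticipantAccess",  # hypothetical
    RoleSessionName="team-a-workload",
)["Credentials"]

# Use the temporary credentials to describe the EKS cluster backing
# HyperPod, e.g. to generate a kubeconfig for submitting tasks that
# HyperPod task governance will schedule and prioritize.
eks = boto3.client(
    "eks",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
cluster = eks.describe_cluster(name="hyperpod-eks-cluster")  # hypothetical name
print(cluster["cluster"]["endpoint"])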

How climate tech startups are building foundation models with HAQM SageMaker HyperPod

In this post, we show how climate tech startups are developing foundation models (FMs) that use extensive environmental datasets to tackle issues such as carbon capture, carbon-negative fuels, the design of new materials for destroying microplastics, and ecosystem preservation. These specialized models require advanced computational capabilities to process and analyze vast amounts of data effectively.

[Image: Generative AI platform maturity stages]

Architect a mature generative AI foundation on AWS

In this post, we give an overview of a well-established generative AI foundation, dive into its components, and present an end-to-end perspective. We look at different operating models and explore how such a foundation can operate within those boundaries. Lastly, we present a maturity model that helps enterprises assess their evolution path.

Set up a custom plugin on HAQM Q Business and authenticate with HAQM Cognito to interact with backend systems

In this post, we demonstrate how to build a custom plugin with HAQM Q Business for backend integration. With little to no development effort, the plugin can integrate existing systems, including third-party systems, and automate critical workflows in just weeks. Additionally, we show how to safeguard the solution using HAQM Cognito and AWS IAM Identity Center, maintaining the safety and integrity of sensitive data and workflows.
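
The snippet below sketches how such a plugin might be registered with the boto3 qbusiness client, assuming an OpenAPI schema for the backend and an HAQM Cognito OAuth client credential stored in Secrets Manager; every identifier here is a placeholder:

import boto3

qbusiness = boto3.client("qbusiness")

# OpenAPI 3 schema describing the backend operations the plugin exposes.
with open("backend-openapi.yaml") as f:
    api_schema = f.read()

response = qbusiness.create_plugin(
    applicationId="a1b2c3d4-0000-0000-0000-000000000000",  # placeholder app ID
    displayName="ticketing-plugin",
    type="CUSTOM",
    serverUrl="https://api.example.com",  # placeholder backend URL
    authConfiguration={
        "oAuth2ClientCredentialConfiguration": {
            # Secret holding the HAQM Cognito OAuth client ID and secret.
            "secretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:plugin-oauth",
            "roleArn": "arn:aws:iam::111122223333:role/QBusinessPluginSecretAccess",
        }
    },
    customPluginConfiguration={
        "description": "Create and query tickets in the backend system",
        "apiSchemaType": "OPEN_API_V3",
        "apiSchema": {"payload": api_schema},
    },
)
print(response["pluginId"])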

Build a financial research assistant using HAQM Q Business and HAQM QuickSight for generative AI–powered insights

In this post, we show you how HAQM Q Business can help augment your generative AI needs for these financial research use cases and more by answering questions, providing summaries, generating content, and securely completing tasks based on data and information in your enterprise systems.
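
As a quick illustration of the query side, here is a sketch using the boto3 qbusiness ChatSync API; the application ID and question are placeholders, and the HAQM QuickSight integration is configured on the Q Business application itself rather than in this call:

import boto3

qbusiness = boto3.client("qbusiness")

# Ask a question against the indexed enterprise data; Q Business answers
# with citations to the source documents it used.
response = qbusiness.chat_sync(
    applicationId="a1b2c3d4-0000-0000-0000-000000000000",  # placeholder
    userMessage="Summarize last quarter's revenue drivers from our filings.",
)

print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"), source.get("url"))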

Best practices for Meta Llama 3.2 multimodal fine-tuning on HAQM Bedrock

In this post, we share comprehensive best practices and scientific insights for fine-tuning Meta Llama 3.2 multimodal models on HAQM Bedrock. By following these guidelines, you can fine-tune smaller, more cost-effective models to achieve performance that rivals or even surpasses that of much larger models, potentially reducing both inference cost and latency while maintaining high accuracy for your specific use case.
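
A minimal sketch of launching such a fine-tuning job with the boto3 bedrock client follows; the base model identifier, S3 paths, role ARN, and hyperparameter values are illustrative assumptions, not recommendations from the post:

import boto3

bedrock = boto3.client("bedrock")

response = bedrock.create_model_customization_job(
    jobName="llama32-multimodal-ft",
    customModelName="llama32-11b-custom",
    roleArn="arn:aws:iam::111122223333:role/BedrockFineTuningRole",  # placeholder
    baseModelIdentifier="meta.llama3-2-11b-instruct-v1:0",           # assumed model ID
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},      # placeholder
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},            # placeholder
    hyperParameters={  # string-valued, per the Bedrock customization API
        "epochCount": "3",
        "learningRate": "0.00001",
        "batchSize": "1",
    },
)
print(response["jobArn"])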