AWS Machine Learning Blog

Category: Analytics

Implement semantic video search using open source large vision models on HAQM SageMaker and HAQM OpenSearch Serverless

In this post, we demonstrate how to use large vision models (LVMs) for semantic video search using natural language and image queries. We introduce some use case-specific methods, such as temporal frame smoothing and clustering, to enhance the video search performance. Furthermore, we demonstrate the end-to-end functionality of this approach by using both asynchronous and real-time hosting options on HAQM SageMaker AI to perform video, image, and text processing using publicly available LVMs on the Hugging Face Model Hub. Finally, we use HAQM OpenSearch Serverless with its vector engine for low-latency semantic video search.
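
Although the full pipeline lives in the post, the query path can be illustrated with a short sketch: embed a natural language query with a SageMaker real-time endpoint hosting an open source LVM text encoder, then run a k-NN search against frame embeddings stored in OpenSearch Serverless. The endpoint name, collection endpoint, index name, and response format below are illustrative assumptions, not the post's exact artifacts.

```python
import json

import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

region = "us-east-1"
smr = boto3.client("sagemaker-runtime", region_name=region)

# 1. Embed the text query with the hosted LVM text encoder
#    (assumes the endpoint returns {"embedding": [...]})
response = smr.invoke_endpoint(
    EndpointName="lvm-text-encoder",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps({"inputs": "a dog catching a frisbee on the beach"}),
)
query_embedding = json.loads(response["Body"].read())["embedding"]

# 2. Run a k-NN search against the OpenSearch Serverless vector index of frame embeddings
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, region, "aoss")
client = OpenSearch(
    hosts=[{"host": "your-collection-id.us-east-1.aoss.amazonaws.com", "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)

results = client.search(
    index="video-frames",  # placeholder index name
    body={
        "size": 5,
        "query": {"knn": {"frame_embedding": {"vector": query_embedding, "k": 5}}},
        "_source": ["video_id", "timestamp"],
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"])
```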

Using HAQM OpenSearch ML connector APIs

OpenSearch offers a wide range of third-party machine learning (ML) connectors that let you augment ingestion and search with external ML services. This post highlights two of these connectors. The first is the HAQM Comprehend connector, which we use to invoke the LangDetect API to detect the languages of ingested documents. The second is the HAQM Bedrock connector, which invokes the HAQM Titan Text Embeddings v2 model so that you can create embeddings from ingested documents and perform semantic search.
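
As a rough illustration of how such a connector is wired up, the following sketch creates an HAQM Bedrock connector for Titan Text Embeddings v2 through the OpenSearch ML Commons connector API. The domain endpoint, IAM role ARN, and authentication setup are placeholders; the post covers the full blueprint, including the HAQM Comprehend (LangDetect) connector and model registration.

```python
from opensearchpy import OpenSearch

# Placeholder client; add SigV4 or basic authentication as appropriate for your domain
client = OpenSearch(
    hosts=[{"host": "your-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

connector_body = {
    "name": "HAQM Bedrock: Titan Text Embeddings v2",
    "description": "Connector for amazon.titan-embed-text-v2:0",
    "version": 1,
    "protocol": "aws_sigv4",
    "parameters": {
        "region": "us-east-1",
        "service_name": "bedrock",
        "model": "amazon.titan-embed-text-v2:0",
    },
    "credential": {"roleArn": "arn:aws:iam::123456789012:role/opensearch-bedrock-access"},
    "actions": [
        {
            "action_type": "predict",
            "method": "POST",
            "url": "https://bedrock-runtime.${parameters.region}.amazonaws.com/model/${parameters.model}/invoke",
            "headers": {"content-type": "application/json"},
            "request_body": '{ "inputText": "${parameters.inputText}" }',
        }
    ],
}

# Create the connector; the returned connector_id is then used to register and deploy a remote model
resp = client.transport.perform_request(
    "POST", "/_plugins/_ml/connectors/_create", body=connector_body
)
print(resp["connector_id"])
```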

Revolutionizing earth observation with geospatial foundation models on AWS

In this post, we explore how a leading GeoFM (Clay Foundation’s Clay foundation model available on Hugging Face) can be deployed for large-scale inference and fine-tuning on HAQM SageMaker.
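
As a minimal sketch of the deployment step, the following assumes the GeoFM weights have been packaged as a model.tar.gz with a custom inference.py handler; the bucket, script, and instance choices are illustrative, not the post's exact configuration.

```python
import sagemaker
from sagemaker.pytorch import PyTorchModel

role = sagemaker.get_execution_role()  # assumes a SageMaker execution context

model = PyTorchModel(
    model_data="s3://your-bucket/clay/model.tar.gz",  # packaged GeoFM weights (placeholder)
    role=role,
    entry_point="inference.py",  # custom handler that returns embeddings or predictions
    framework_version="2.1",
    py_version="py310",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",  # GPU instance for large-scale inference
)
```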

A generative AI prototype with HAQM Bedrock transforms life sciences and the genome analysis process

This post explores deploying a text-to-SQL pipeline that uses generative AI models on HAQM Bedrock to answer natural language questions against a genomics database. We demonstrate how to implement an AI assistant web interface with AWS Amplify and explain the prompt engineering strategies used to generate the SQL queries. Finally, we provide instructions to deploy the service in your own AWS account.
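
The core text-to-SQL step can be sketched as a single HAQM Bedrock Converse call that passes a table schema and a question and returns a SQL string. The model ID, schema, and prompt wording below are illustrative assumptions rather than the post's exact pipeline.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative genomics schema; the post's database schema will differ
schema = """
Table variants(sample_id VARCHAR, chromosome VARCHAR, position BIGINT,
               reference_allele VARCHAR, alternate_allele VARCHAR, gene VARCHAR)
"""

question = "How many variants were found in the BRCA1 gene across all samples?"

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model choice
    system=[{"text": "You translate questions into SQL. Return only the SQL query."}],
    messages=[
        {"role": "user", "content": [{"text": f"Schema:\n{schema}\nQuestion: {question}"}]}
    ],
)

sql_query = response["output"]["message"]["content"][0]["text"]
print(sql_query)  # validate before running against the genomics database
```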

Build a financial research assistant using HAQM Q Business and HAQM QuickSight for generative AI–powered insights

In this post, we show you how HAQM Q Business can support these financial research use cases and more by answering questions, providing summaries, generating content, and securely completing tasks based on data and information in your enterprise systems.

Responsible AI in action: How Data Reply red teaming supports generative AI safety on AWS

In this post, we explore how AWS services can be seamlessly integrated with open source tools to help establish a robust red teaming mechanism within your organization. Specifically, we discuss Data Reply’s red teaming solution, a comprehensive blueprint to enhance AI safety and responsible AI practices.

Combine keyword and semantic search for text and images using HAQM Bedrock and HAQM OpenSearch Service

In this post, we walk you through how to build a hybrid search solution using OpenSearch Service powered by multimodal embeddings from the HAQM Titan Multimodal Embeddings G1 model through HAQM Bedrock. This solution demonstrates how you can enable users to submit both text and images as queries to retrieve relevant results from a sample retail image dataset.
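
The query path can be sketched as two calls: embed the text (and optionally an image) with the Titan Multimodal Embeddings G1 model through HAQM Bedrock, then issue an OpenSearch hybrid query that combines a keyword match with a k-NN match. Index, field, and pipeline names are placeholders, and a search pipeline with a score normalization processor is assumed to already exist.

```python
import json

import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

query_text = "red leather handbag"
resp = bedrock.invoke_model(
    modelId="amazon.titan-embed-image-v1",  # Titan Multimodal Embeddings G1
    body=json.dumps({"inputText": query_text}),  # add "inputImage": <base64> for image queries
)
embedding = json.loads(resp["body"].read())["embedding"]

# Placeholder client; add SigV4 or basic authentication as appropriate for your domain
client = OpenSearch(
    hosts=[{"host": "your-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

results = client.search(
    index="retail-images",
    body={
        "size": 10,
        "query": {
            "hybrid": {
                "queries": [
                    {"match": {"description": {"query": query_text}}},  # keyword match
                    {"knn": {"image_embedding": {"vector": embedding, "k": 10}}},  # semantic match
                ]
            }
        },
    },
    params={"search_pipeline": "hybrid-norm-pipeline"},  # normalization pipeline assumed to exist
)
```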

Stream ingest data from Kafka to HAQM Bedrock Knowledge Bases using custom connectors

For this post, we implement a RAG architecture with HAQM Bedrock Knowledge Bases using a custom connector and topics built with HAQM Managed Streaming for Apache Kafka (HAQM MSK) for a user interested in understanding stock price trends.
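
A custom connector of this kind can be sketched as a consumer loop that reads stock price messages from the MSK topic and pushes them into the knowledge base as custom data source documents. The topic, broker, knowledge base ID, and data source ID below are placeholders, MSK authentication is omitted, and the exact ingestion payload shape may need adjustment for your setup.

```python
import json

import boto3
from kafka import KafkaConsumer  # kafka-python

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

consumer = KafkaConsumer(
    "stock-prices",  # placeholder topic name
    bootstrap_servers=["b-1.your-msk-cluster:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    record = message.value  # e.g., {"ticker": "AMZN", "price": 185.4, "ts": "..."}
    # Push the record into the knowledge base as a custom data source document;
    # the payload shape below follows the custom data source format and may need adjustment
    bedrock_agent.ingest_knowledge_base_documents(
        knowledgeBaseId="KB1234567890",  # placeholder knowledge base ID
        dataSourceId="DS1234567890",  # placeholder custom data source ID
        documents=[
            {
                "content": {
                    "dataSourceType": "CUSTOM",
                    "custom": {
                        "customDocumentIdentifier": {"id": f"{record['ticker']}-{message.offset}"},
                        "sourceType": "IN_LINE",
                        "inlineContent": {
                            "type": "TEXT",
                            "textContent": {"data": json.dumps(record)},
                        },
                    },
                }
            }
        ],
    )
```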