AWS Big Data Blog

Category: Artificial Intelligence

Improve search results for AI using HAQM OpenSearch Service as a vector database with HAQM Bedrock

In this post, you’ll learn how to use OpenSearch Service and HAQM Bedrock to build AI-powered search and generative AI applications. AI-powered search systems employ foundation models (FMs) to capture and search context and meaning across text, images, audio, and video, delivering more accurate results to users. Generative AI systems then use these search results to create original responses to questions, supporting interactive conversations between humans and machines.
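The core retrieval step behind this kind of search can be illustrated with a minimal, self-contained sketch. The toy three-dimensional embeddings below stand in for the high-dimensional vectors an FM (for example, one hosted on HAQM Bedrock) would produce; in practice, OpenSearch Service performs this nearest-neighbor ranking server-side over a `knn_vector` field:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings standing in for real FM output; each document is
# represented by a vector that encodes its meaning, not its keywords.
docs = {
    "returns policy": [0.9, 0.1, 0.0],
    "refund process": [0.7, 0.3, 0.2],
    "shipping times": [0.1, 0.9, 0.1],
}

def semantic_search(query_vec, k=2):
    """Rank documents by embedding similarity to the query, highest first."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]
```

A query vector close to the "returns" region of the embedding space retrieves both returns-related documents, even though they share no exact keywords with each other.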


Foundational blocks of HAQM SageMaker Unified Studio: An admin’s guide to implement unified access to all your data, analytics, and AI

In this post, we discuss the foundational building blocks of SageMaker Unified Studio and how, by abstracting complex technical implementations behind user-friendly interfaces, organizations can maintain standardized governance while enabling efficient resource management across business units. This approach delivers consistency in infrastructure deployment while providing the flexibility needed for diverse business requirements.

Use DeepSeek with HAQM OpenSearch Service vector database and HAQM SageMaker

OpenSearch Service provides rich capabilities for RAG use cases, as well as vector embedding-powered semantic search. You can use the flexible connector framework and search flow pipelines in OpenSearch to connect to models hosted by DeepSeek, Cohere, and OpenAI, as well as models hosted on HAQM Bedrock and SageMaker. In this post, we build a connection to DeepSeek’s text generation model, supporting a RAG workflow to generate text responses to user queries.
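The RAG workflow described here has three steps: retrieve relevant context, assemble a grounded prompt, and ask the text generation model to answer from it. A minimal sketch of that flow, where `generate` is a hypothetical callable standing in for the DeepSeek model invoked through the OpenSearch connector, and the toy word-overlap retriever stands in for OpenSearch vector search:

```python
def retrieve(query, corpus, k=2):
    """Toy retriever: rank passages by word overlap with the query.
    In the post, this role is played by OpenSearch vector search."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, passages):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

def rag_answer(query, corpus, generate):
    """generate is a hypothetical wrapper around the hosted text model."""
    passages = retrieve(query, corpus)
    return generate(build_rag_prompt(query, passages))
```

Because the prompt contains only the retrieved passages, the model's answer is grounded in the indexed corpus rather than its training data alone.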

How EUROGATE established a data mesh architecture using HAQM DataZone

In this post, we show you how EUROGATE uses AWS services, including HAQM DataZone, to make data discoverable by data consumers across different business units so that they can innovate faster. Two use cases illustrate how this can be applied for business intelligence (BI) and data science applications, using AWS services such as HAQM Redshift and HAQM SageMaker.

Cost Optimized Vector Database: Introduction to HAQM OpenSearch Service quantization techniques

This blog post introduces a new disk-based vector search approach that allows efficient querying of vectors stored on disk without loading them entirely into memory. By implementing these quantization methods, organizations can achieve compression ratios of up to 64x, enabling cost-effective scaling of vector databases for large-scale AI and machine learning applications.
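The arithmetic behind those compression ratios is straightforward: a full-precision dimension costs 32 bits, so quantizing to b bits per dimension shrinks vector storage by roughly 32/b (binary quantization, at 1 bit per dimension, gives 32x). A quick sketch with illustrative numbers, ignoring index overhead rather than modeling OpenSearch's exact on-disk layout:

```python
def compression_ratio(bits_per_dim, full_precision_bits=32):
    """Ratio of full-precision (fp32) storage to quantized storage."""
    return full_precision_bits / bits_per_dim

def vector_storage_bytes(num_vectors, dim, bits_per_dim):
    """Approximate raw vector storage, ignoring index overhead."""
    return num_vectors * dim * bits_per_dim // 8

# 10M 768-dimension fp32 vectors need about 30.7 GB of raw storage;
# 1-bit binary quantization cuts that by 32x, to under 1 GB.
fp32_bytes = vector_storage_bytes(10_000_000, 768, 32)
binary_bytes = vector_storage_bytes(10_000_000, 768, 1)
```

At these scales, keeping quantized vectors on disk and full-precision vectors out of memory is what makes the cost savings in the post possible.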

Enhancing Search Relevancy with Cohere Rerank 3.5 and HAQM OpenSearch Service

In this blog post, we’ll dive into the scenarios in which Cohere Rerank 3.5 improves search results for Best Matching 25 (BM25), a keyword-based algorithm that performs lexical search, as well as for semantic search. We’ll also cover how businesses can significantly improve user experience, increase engagement, and ultimately drive better search outcomes by implementing a reranking pipeline.
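A reranking pipeline of this kind has two stages: a cheap first-pass retrieval (such as BM25) over the whole corpus, then a more expensive relevance model scoring only the top candidates. A minimal sketch, with a word-overlap ranker standing in for BM25 and `rerank_score` as a hypothetical stand-in for a call to a model such as Cohere Rerank 3.5:

```python
def first_pass(query, corpus, k=10):
    """Cheap lexical retrieval: rank by term overlap (BM25 stand-in)."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def rerank(query, candidates, rerank_score, k=3):
    """Re-order first-pass candidates with the expensive relevance model,
    which only ever sees the small candidate list, not the full corpus."""
    return sorted(candidates, key=lambda d: rerank_score(query, d),
                  reverse=True)[:k]
```

The design point is cost: the reranker's deeper relevance judgment is applied to tens of candidates instead of millions of documents.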

Recap of HAQM Redshift key product announcements in 2024

HAQM Redshift made significant strides in 2024: it enhanced price-performance, enabled data lakehouse architectures by blurring the boundaries between data lakes and data warehouses, simplified ingestion, accelerated near real-time analytics, and incorporated generative AI capabilities to build natural language-based applications and boost user productivity. This blog post provides a comprehensive overview of the major product innovations and enhancements made to HAQM Redshift in 2024.