AWS Big Data Blog
Category: HAQM SageMaker
Simplify data access for your enterprise using HAQM SageMaker Lakehouse
HAQM SageMaker Lakehouse offers a unified solution for enterprise data access, combining data from data warehouses and data lakes. This post demonstrates how SageMaker Lakehouse brings scattered data sources together, enables secure enterprise-wide access, and lets teams use their preferred tools to predict and analyze customer churn. The solution spans multiple data sources, including HAQM S3, HAQM Redshift, and the AWS Glue Data Catalog, with AWS Lake Formation managing permissions.
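As a minimal sketch of what tool-of-choice access can look like in this model, the following Python snippet queries a churn table through HAQM Athena with boto3. The database, table, and output location are hypothetical placeholders, not values from the post.

```python
import time
import boto3

# Sketch: query a (hypothetical) churn table registered in the Data
# Catalog through Athena. Database, table, and output location are
# placeholders.
athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT customer_id, churn_flag FROM customer_churn LIMIT 10",
    QueryExecutionContext={"Database": "lakehouse_demo"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Poll for completion, then fetch the result rows.
query_id = response["QueryExecutionId"]
state = "RUNNING"
while state in ("QUEUED", "RUNNING"):
    time.sleep(1)
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:  # the first row holds the column headers
        print([col.get("VarCharValue") for col in row["Data"]])
```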
Author visual ETL flows on HAQM SageMaker Unified Studio
HAQM SageMaker Unified Studio (preview) provides an integrated data and AI development environment within HAQM SageMaker. This post shows how you can build a low-code and no-code (LCNC) visual ETL flow that enables seamless data ingestion and transformation across multiple data sources.
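A visual flow ultimately runs as generated ETL code. Purely as a hedged illustration of the kind of job such a flow corresponds to, here is a short AWS Glue PySpark sketch; the catalog database, table, filter logic, and S3 path are all hypothetical.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Sketch of a Glue ETL job roughly equivalent to a simple visual flow:
# read from the Data Catalog, filter, and write to HAQM S3 as Parquet.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a hypothetical catalog table.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Transform: keep completed orders only (placeholder logic).
completed = source.filter(lambda rec: rec["status"] == "completed")

# Target: a hypothetical S3 location.
glue_context.write_dynamic_frame.from_options(
    frame=completed,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```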
Simplify data integration with AWS Glue and zero-ETL to HAQM SageMaker Lakehouse
AWS has introduced zero-ETL integration support from external applications to AWS Glue, simplifying data integration for organizations. This new feature allows for seamless replication of data from popular platforms like Salesforce, ServiceNow, and Zendesk into HAQM SageMaker Lakehouse and HAQM Redshift. This blog post demonstrates a use case involving ServiceNow data integration, outlining the process of setting up a connector, creating a zero-ETL integration, and verifying both initial data load and change data capture (CDC). It also highlights the advantages of using Apache Iceberg for data versioning and time travel capabilities within zero-ETL integrations.
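Because the integration lands data in Apache Iceberg tables, earlier table states remain queryable. A minimal sketch of a time travel query in Athena, assuming a replicated table named servicenow_incident in a database named zeroetl_db (both placeholders):

```python
import boto3

# Sketch: run an Iceberg time travel query in Athena against a table
# replicated by the zero-ETL integration. All names are placeholders.
athena = boto3.client("athena")

time_travel_sql = """
SELECT sys_id, state
FROM servicenow_incident
FOR TIMESTAMP AS OF TIMESTAMP '2024-12-01 00:00:00 UTC'
LIMIT 10
"""

athena.start_query_execution(
    QueryString=time_travel_sql,
    QueryExecutionContext={"Database": "zeroetl_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```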
Catalog and govern HAQM Athena federated queries with HAQM SageMaker Lakehouse
In this post, we show how to connect to, govern, and run federated queries on data stored in HAQM Redshift, HAQM DynamoDB (Preview), and Snowflake (Preview). To query our data, we use Athena, which is seamlessly integrated with SageMaker Unified Studio. We use SageMaker Lakehouse to present data to end users as federated catalogs, a new type of catalog object. Finally, we demonstrate how to use column-level security permissions in AWS Lake Formation to give analysts access to the data they need while restricting access to sensitive information.
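For the column-level permissions step, a minimal boto3 sketch follows; the principal ARN, database, table, and column names are hypothetical.

```python
import boto3

# Sketch: grant an analyst role SELECT on only non-sensitive columns of a
# table using Lake Formation. All names are placeholders.
lakeformation = boto3.client("lakeformation")

lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AnalystRole"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "customer_db",
            "Name": "customers",
            # Only these columns become visible; sensitive fields are
            # simply not granted.
            "ColumnNames": ["customer_id", "segment", "signup_date"],
        }
    },
    Permissions=["SELECT"],
)
```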
The next generation of HAQM SageMaker: The center for all your data, analytics, and AI
This week on the keynote stages at AWS re:Invent 2024, you heard Matt Garman, CEO of AWS, and Swami Sivasubramanian, VP of AI and Data at AWS, speak about the next generation of HAQM SageMaker, the center for all of your data, analytics, and AI. This update addresses the evolving relationship between analytics and AI workloads, aiming to streamline how customers work with their data. It helps organizations collaborate more effectively, reduce data silos, and accelerate the development of AI-powered applications while maintaining robust governance and security measures.
Integrate sparse and dense vectors to enhance knowledge retrieval in RAG using HAQM OpenSearch Service
In this post, instead of using the BM25 algorithm, we introduce sparse vector retrieval. This approach offers improved term expansion while maintaining interpretability. We walk through the steps of integrating sparse and dense vectors for knowledge retrieval using HAQM OpenSearch Service and run experiments on public datasets to demonstrate its advantages.
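A hedged sketch of such a combined query with opensearch-py follows; the index name, field names, model IDs, and pipeline name are placeholders, and the index is assumed to already hold both a sparse (rank_features) field and a dense (knn_vector) field.

```python
from opensearchpy import OpenSearch

# Sketch: combine a sparse (neural_sparse) and a dense (neural) sub-query
# in one hybrid query. Index, fields, and model IDs are placeholders.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

query = {
    "query": {
        "hybrid": {
            "queries": [
                {   # Sparse retrieval with learned term expansion.
                    "neural_sparse": {
                        "sparse_embedding": {
                            "query_text": "how to renew a passport",
                            "model_id": "sparse-model-id",
                        }
                    }
                },
                {   # Dense retrieval over a k-NN vector field.
                    "neural": {
                        "dense_embedding": {
                            "query_text": "how to renew a passport",
                            "model_id": "dense-model-id",
                            "k": 10,
                        }
                    }
                },
            ]
        }
    }
}

results = client.search(index="docs", body=query,
                        params={"search_pipeline": "hybrid-pipeline"})
```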
Protein similarity search using ProtT5-XL-UniRef50 and HAQM OpenSearch Service
A protein is a sequence of amino acids that, when chained together, folds into a 3D structure. This 3D structure allows the protein to bind to other structures in the body and initiate changes, and this binding is core to how many drugs work. A common workflow in drug discovery is searching for similar proteins, because […]
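A hedged sketch of the two core steps, embedding a sequence with ProtT5-XL-UniRef50 and running a k-NN query; the index and field names are placeholders, and the index is assumed to have a 1,024-dimension knn_vector field.

```python
import re
import torch
from opensearchpy import OpenSearch
from transformers import T5EncoderModel, T5Tokenizer

# Step 1: embed a protein sequence with ProtT5-XL-UniRef50.
tokenizer = T5Tokenizer.from_pretrained(
    "Rostlab/prot_t5_xl_uniref50", do_lower_case=False)
model = T5EncoderModel.from_pretrained("Rostlab/prot_t5_xl_uniref50")

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder sequence
# ProtT5 expects space-separated residues; rare amino acids map to X.
prepared = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(prepared, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 1024)
embedding = hidden.mean(dim=1).squeeze().tolist()  # mean-pool to one vector

# Step 2: k-NN similarity query in OpenSearch (names are placeholders).
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
response = client.search(
    index="proteins",
    body={"size": 5,
          "query": {"knn": {"embedding": {"vector": embedding, "k": 5}}}},
)
for hit in response["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```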
Build a decentralized semantic search engine on heterogeneous data stores using autonomous agents
In this post, we show how to build a Q&A bot with RAG (Retrieval Augmented Generation). RAG uses data sources like HAQM Redshift and HAQM OpenSearch Service to retrieve documents that augment the LLM prompt. To get data from HAQM Redshift, we use Anthropic Claude 2.0 on HAQM Bedrock, summarizing the final response based on predefined prompt templates from LangChain. To get data from HAQM OpenSearch Service, we chunk the source data and convert the chunks to vectors using the HAQM Titan Text Embeddings model.
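A minimal sketch of the two Bedrock model calls with boto3; the prompt text and input chunk are placeholders.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Sketch: summarize retrieved rows with Anthropic Claude 2 on Bedrock.
claude_body = json.dumps({
    "prompt": "\n\nHuman: Summarize these sales rows: ...\n\nAssistant:",
    "max_tokens_to_sample": 300,
})
summary = bedrock.invoke_model(
    modelId="anthropic.claude-v2", body=claude_body)
print(json.loads(summary["body"].read())["completion"])

# Sketch: embed a document chunk with the Titan Text Embeddings model.
titan_body = json.dumps({"inputText": "a chunk of source text"})
embedding_response = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1", body=titan_body)
vector = json.loads(embedding_response["body"].read())["embedding"]
print(len(vector))  # 1536-dimensional vector
```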
Hybrid Search with HAQM OpenSearch Service
This post explains the internals of hybrid search and how to build a hybrid search solution using OpenSearch Service. We experiment with sample queries to explore and compare lexical, semantic, and hybrid search. All the code used in this post is publicly available in the GitHub repository.
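For context, hybrid search in OpenSearch normalizes the scores of lexical and semantic sub-queries and combines them through a search pipeline. A hedged sketch of creating such a pipeline; the pipeline name and weights are illustrative, not taken from the post.

```python
from opensearchpy import OpenSearch

# Sketch: create a search pipeline that min-max normalizes sub-query
# scores and combines them with a weighted arithmetic mean.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

pipeline = {
    "phase_results_processors": [
        {
            "normalization-processor": {
                "normalization": {"technique": "min_max"},
                "combination": {
                    "technique": "arithmetic_mean",
                    "parameters": {"weights": [0.3, 0.7]},  # lexical, semantic
                },
            }
        }
    ]
}

client.transport.perform_request(
    "PUT", "/_search/pipeline/hybrid-pipeline", body=pipeline)
```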
Preprocess and fine-tune LLMs quickly and cost-effectively using HAQM EMR Serverless and HAQM SageMaker
Large language models (LLMs) are becoming increasingly popular, with new use cases constantly being explored. In general, you can build applications powered by LLMs by incorporating prompt engineering into your code. However, there are cases where prompting an existing LLM falls short. This is where model fine-tuning can help. Prompt engineering is about guiding the […]
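As a hedged sketch of the preprocessing half, the following submits a Spark job to an EMR Serverless application with boto3; the application ID, execution role ARN, and script paths are placeholders.

```python
import boto3

# Sketch: submit a Spark preprocessing job to EMR Serverless. The
# application ID, execution role, and script location are placeholders.
emr = boto3.client("emr-serverless")

response = emr.start_job_run(
    applicationId="00f1234567890abc",
    executionRoleArn="arn:aws:iam::111122223333:role/EMRServerlessJobRole",
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/scripts/preprocess_corpus.py",
            "entryPointArguments": ["--input", "s3://my-bucket/raw/",
                                    "--output", "s3://my-bucket/tokenized/"],
            "sparkSubmitParameters": "--conf spark.executor.memory=4g",
        }
    },
)
print(response["jobRunId"])
```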