AWS Machine Learning Blog
Category: Amazon Textract
Unleashing the multimodal power of Amazon Bedrock Data Automation to transform unstructured data into actionable insights
Today, we’re excited to announce the general availability of Amazon Bedrock Data Automation, a powerful, fully managed capability within Amazon Bedrock that seamlessly transforms unstructured multimodal data into structured, application-ready insights with high accuracy, cost efficiency, and scalability.
How Pattern PXM’s Content Brief is driving conversion on ecommerce marketplaces using AI
Pattern is a leader in ecommerce acceleration, helping brands navigate the complexities of selling on marketplaces and achieve profitable growth through a combination of proprietary technology and on-demand expertise. In this post, we share how Pattern uses AWS services to process trillions of data points to deliver actionable insights, optimizing product listings across multiple marketplaces.
How Travelers Insurance classified emails with Amazon Bedrock and prompt engineering
In this post, we discuss how FMs can reliably automate the classification of insurance service emails through prompt engineering. When the problem is formulated as a classification task, an FM can perform well enough for a production environment while remaining extensible to other tasks and quick to get up and running. All experiments were conducted using Anthropic’s Claude models on Amazon Bedrock.
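The classification framing described in the post can be sketched roughly as follows. The category list, helper name, and prompt wording are illustrative assumptions, not Travelers’ actual taxonomy; the Amazon Bedrock call itself is omitted.

```python
# Hypothetical sketch: framing an insurance service email as a
# single-label classification task for an FM. CATEGORIES is an
# assumed example taxonomy, not the one used in the post.
CATEGORIES = ["address change", "billing question", "policy cancellation", "other"]

def build_classification_prompt(email_body: str, categories=CATEGORIES) -> str:
    """Build a zero-shot prompt asking the model to pick exactly one label."""
    labels = "\n".join(f"- {c}" for c in categories)
    return (
        "Classify the insurance service email below into exactly one of "
        f"these categories:\n{labels}\n\n"
        f"<email>\n{email_body}\n</email>\n\n"
        "Respond with only the category name."
    )
```

In practice, the resulting prompt would be sent to a Claude model through the Amazon Bedrock Runtime API (for example via a boto3 `bedrock-runtime` client), and the model’s short response mapped back onto the category list.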
How Crexi achieved ML model deployment on AWS at scale and boosted efficiency
Commercial Real Estate Exchange, Inc. (Crexi) is a digital marketplace and platform designed to streamline commercial real estate transactions. In this post, we review how Crexi met its business needs and developed a versatile and powerful framework for AI/ML pipeline creation and deployment. This customizable and scalable solution allows its ML models to be efficiently deployed and managed to meet diverse project requirements.
Multilingual content processing using Amazon Bedrock and Amazon A2I
This post outlines a custom multilingual document extraction and content assessment framework using a combination of Anthropic’s Claude 3 on Amazon Bedrock and Amazon A2I to incorporate human-in-the-loop capabilities.
Accelerate performance using a custom chunking mechanism with Amazon Bedrock
This post explores how Accenture used the customization capabilities of Knowledge Bases for Amazon Bedrock to incorporate its data processing workflow and custom logic into a custom chunking mechanism that enhances the performance of Retrieval Augmented Generation (RAG) and unlocks the potential of PDF data.
Intelligent healthcare forms analysis with Amazon Bedrock
In this post, we explore using Anthropic’s Claude 3 large language model (LLM) on Amazon Bedrock. Amazon Bedrock provides access to several LLMs, such as Anthropic’s Claude 3, which can be used to generate semi-structured data relevant to the healthcare industry. This can be particularly useful for creating various healthcare-related forms, such as patient intake forms, insurance claim forms, or medical history questionnaires.
How Deltek uses Amazon Bedrock for question answering on government solicitation documents
This post provides an overview of a custom solution developed by the AWS Generative AI Innovation Center (GenAIIC) for Deltek, a globally recognized standard for project-based businesses in both government contracting and professional services. Deltek serves over 30,000 clients with industry-specific software and information solutions. In this collaboration, the AWS GenAIIC team created a RAG-based solution for Deltek to enable Q&A on single and multiple government solicitation documents. The solution uses AWS services including Amazon Textract, Amazon OpenSearch Service, and Amazon Bedrock.
Automate derivative confirms processing using AWS AI services for the capital markets industry
In this post, we show how you can automate and intelligently process derivative confirms at scale using AWS AI services. The solution combines Amazon Textract, a fully managed ML service that extracts text, handwriting, and data from scanned documents, with AWS serverless technologies, a suite of fully managed event-driven services for running code, managing data, and integrating applications, all without managing servers.
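As a rough illustration of the extraction step, the JSON returned by Amazon Textract’s DetectDocumentText operation is a flat list of `Blocks`; a small helper can collect just the detected lines of text. The service call itself (via a boto3 `textract` client) is omitted, so this is a sketch over Textract’s documented response shape, not the post’s full pipeline.

```python
def extract_lines(textract_response: dict) -> list[str]:
    """Collect the text of LINE blocks from a Textract
    DetectDocumentText response dictionary."""
    return [
        block["Text"]
        for block in textract_response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    ]
```

Downstream steps, such as the event-driven serverless processing described above, would then operate on these extracted lines.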
Use zero-shot large language models on Amazon Bedrock for custom named entity recognition
Named entity recognition (NER) is the process of extracting information of interest, called entities, from structured or unstructured text. Manually identifying all mentions of specific types of information in documents is extremely time-consuming and labor-intensive. Some examples include extracting players and positions in an NFL game summary, products mentioned in an AWS keynote transcript, or […]
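The zero-shot NER pattern can be sketched as a prompt that asks the model for a JSON array of typed entity mentions, plus a small parser for the response. The prompt wording and helper names are assumptions for illustration; the actual Amazon Bedrock invocation is omitted.

```python
import json

def build_ner_prompt(text: str, entity_types: list[str]) -> str:
    """Zero-shot NER: ask the model for a JSON array of typed mentions."""
    types = ", ".join(entity_types)
    return (
        f"Extract all entities of the following types from the text: {types}.\n"
        'Return only a JSON array of objects with keys "type" and "mention".\n\n'
        f"Text: {text}"
    )

def parse_entities(model_output: str) -> list[dict]:
    """Pull the JSON array out of the model's reply,
    tolerating any surrounding prose."""
    start = model_output.find("[")
    end = model_output.rfind("]") + 1
    return json.loads(model_output[start:end])
```

The parser deliberately scans for the outermost brackets because LLM replies often wrap the requested JSON in extra commentary.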