AWS Machine Learning Blog


Build a domain-aware data preprocessing pipeline: A multi-agent collaboration approach

In this post, we introduce a multi-agent collaboration pipeline for processing unstructured insurance data using Amazon Bedrock, featuring specialized agents for classification, conversion, and metadata extraction. We demonstrate how this domain-aware approach transforms diverse data formats like claims documents, videos, and audio files into metadata-rich outputs that enable fraud detection, customer 360-degree views, and advanced analytics.

Automating complex document processing: How Onity Group built an intelligent solution using Amazon Bedrock

In this post, we explore how Onity Group, a financial services company specializing in mortgage servicing and origination, transformed their document processing capabilities using Amazon Bedrock and other AWS services. The solution helped Onity achieve a 50% reduction in document extraction costs while improving overall accuracy by 20% compared to their previous OCR and AI/ML solution.

HERE Technologies boosts developer productivity with new generative AI-powered coding assistant

HERE collaborated with the AWS Generative AI Innovation Center (GenAIIC). Our joint mission was to create an intelligent AI coding assistant that could provide explanations and executable code solutions in response to users' natural language queries. The requirement was to build a scalable system that could translate natural language questions into HTML code with embedded JavaScript, ready for immediate rendering as an interactive map that users can see on screen.


Set up a custom plugin on Amazon Q Business and authenticate with Amazon Cognito to interact with backend systems

In this post, we demonstrate how to build a custom plugin with Amazon Q Business for backend integration. This plugin can integrate existing systems, including third-party systems, with little to no custom development, in just weeks, and automate critical workflows. Additionally, we show how to safeguard the solution using Amazon Cognito and AWS IAM Identity Center, maintaining the safety and integrity of sensitive data and workflows.

AWS machine learning supports Scuderia Ferrari HP pit stop analysis

Pit crews are trained to operate at optimum efficiency, but measuring their performance has historically been challenging. In this post, we share how Amazon Web Services (AWS) is helping Scuderia Ferrari HP develop more accurate pit stop analysis techniques using machine learning (ML).


Accelerate edge AI development with SiMa.ai Edgematic through seamless AWS integration

In this post, we demonstrate how to retrain and quantize a model using SageMaker AI and the SiMa.ai Palette software suite. The goal is to accurately detect individuals in environments where visibility and protective equipment detection are essential for compliance and safety.


How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod

Building on this foundation of specialized information extraction solutions and using the capabilities of SageMaker HyperPod, we collaborate with Apoidea Group to explore the use of large vision language models (LVLMs) to further improve table structure recognition performance on banking and financial documents. In this post, we present our work and step-by-step code on fine-tuning the Qwen2-VL-7B-Instruct model using LLaMA-Factory on SageMaker HyperPod.

Figure 1 – Vxceed's LimoConnect Q architecture

Vxceed secures transport operations with Amazon Bedrock

AWS partnered with Vxceed to support their AI strategy, resulting in the development of LimoConnect Q, an innovative ground transportation management solution. Using AWS services including Amazon Bedrock and AWS Lambda, Vxceed successfully built a secure, AI-powered solution that streamlines trip booking and document processing.