AWS Machine Learning Blog

AWS machine learning supports Scuderia Ferrari HP pit stop analysis

Pit crews are trained to operate at optimum efficiency, although measuring their performance has been challenging, until now. In this post, we share how HAQM Web Services (AWS) is helping Scuderia Ferrari HP develop more accurate pit stop analysis techniques using machine learning (ML).

Safe Workplace

Accelerate edge AI development with SiMa.ai Edgematic with a seamless AWS integration

In this post, we demonstrate how to retrain and quantize a model using SageMaker AI and the SiMa.ai Palette software suite. The goal is to accurately detect individuals in environments where visibility and protective equipment detection are essential for compliance and safety.

visual language model

How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on HAQM SageMaker HyperPod

Building on this foundation of specialized information extraction solutions and using the capabilities of SageMaker HyperPod, we collaborate with APOIDEA Group to explore the use of large vision language models (LVLMs) to further improve table structure recognition performance on banking and financial documents. In this post, we present our work and step-by-step code on fine-tuning the Qwen2-VL-7B-Instruct model using LLaMA-Factory on SageMaker HyperPod.

Figure 1 – Vxceed's LimoConnect Q architecture

Vxceed secures transport operations with HAQM Bedrock

AWS partnered with Vxceed to support their AI strategy, resulting in the development of LimoConnect Q, an innovative ground transportation management solution. Using AWS services including HAQM Bedrock and Lambda, Vxceed successfully built a secure, AI-powered solution that streamlines trip booking and document processing.

Cost-effective AI image generation with PixArt-Sigma inference on AWS Trainium and AWS Inferentia

This post is the first in a series where we will run multiple diffusion transformers on Trainium and Inferentia-powered instances. In this post, we show how you can deploy PixArt-Sigma to Trainium and Inferentia-powered instances.

Customize DeepSeek-R1 671b model using HAQM SageMaker HyperPod recipes – Part 2

In this post, we use the recipes to fine-tune the original DeepSeek-R1 671b parameter model. We demonstrate this through the step-by-step implementation of these recipes using both SageMaker training jobs and SageMaker HyperPod.

Build a financial research assistant using HAQM Q Business and HAQM QuickSight for generative AI–powered insights

In this post, we show you how HAQM Q Business can help augment your generative AI needs in all the abovementioned use cases and more by answering questions, providing summaries, generating content, and securely completing tasks based on data and information in your enterprise systems.

Securing HAQM Bedrock Agents: A guide to safeguarding against indirect prompt injections

Generative AI tools have transformed how we work, create, and process information. At HAQM Web Services (AWS), security is our top priority. Therefore, HAQM Bedrock provides comprehensive security controls and best practices to help protect your applications and data. In this post, we explore the security measures and practical strategies provided by HAQM Bedrock Agents to safeguard your AI interactions against indirect prompt injections, making sure that your applications remain both secure and reliable.