AWS Machine Learning Blog
Build scalable, containerized RAG-based generative AI applications on AWS using HAQM EKS with HAQM Bedrock
In this post, we demonstrate a solution that uses HAQM Elastic Kubernetes Service (HAQM EKS) with HAQM Bedrock to build scalable, containerized RAG solutions for your generative AI applications on AWS, while bringing your unstructured user file data to HAQM Bedrock in a straightforward, fast, and secure way.
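As a rough illustration of the query path such a containerized service might expose, the sketch below calls a Bedrock knowledge base from Python with boto3. It assumes a knowledge base has already been created over your unstructured file data; the knowledge base ID, model ARN, region, and sample question are placeholders, not values from the post.

```python
# Minimal sketch, assuming an existing HAQM Bedrock knowledge base over your file data.
# The knowledge base ID, model ARN, and region below are placeholders.
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

def answer(question: str) -> str:
    # Retrieve relevant chunks from the knowledge base and generate a grounded answer.
    response = bedrock_agent_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "YOUR_KB_ID",
                "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
            },
        },
    )
    return response["output"]["text"]

if __name__ == "__main__":
    print(answer("What does our onboarding policy say about laptops?"))
```

A function like this would typically sit behind the HTTP endpoint of a container running on HAQM EKS, so the RAG layer scales with your pods while HAQM Bedrock handles retrieval and generation.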
How Hexagon built an AI assistant using AWS generative AI services
Recognizing the transformative benefits of generative AI for enterprises, we at Hexagon’s Asset Lifecycle Intelligence division sought to enhance how users interact with our Enterprise Asset Management (EAM) products. To that end, we partnered with AWS to develop HxGN Alix, an AI-powered digital worker built on AWS generative AI services. This post explores the strategy, development, and implementation of HxGN Alix, demonstrating how a tailored AI solution can drive efficiency and enhance user satisfaction.
Build an intelligent community agent to revolutionize IT support with HAQM Q Business
In this post, we demonstrate how your organization can reduce the end-to-end burden of resolving the recurring challenges your IT support teams face, from understanding errors and reviewing diagnoses, remediation steps, and relevant documentation, to opening external support tickets through common third-party services such as Jira.
Elevate marketing intelligence with HAQM Bedrock and LLMs for content creation, sentiment analysis, and campaign performance evaluation
In the media and entertainment industry, understanding and predicting the effectiveness of marketing campaigns is crucial for success. Marketing campaigns are the driving force behind successful businesses, playing a pivotal role in attracting new customers, retaining existing ones, and ultimately boosting revenue. However, launching campaigns isn’t enough; to maximize their impact and help achieve […]
How Deutsche Bahn redefines forecasting using Chronos models – Now available on HAQM Bedrock Marketplace
Whereas traditional forecasting methods typically rely on statistical modeling, Chronos treats time series data as a language to be modeled and uses a pre-trained foundation model (FM) to generate forecasts, similar to how large language models (LLMs) generate text. Chronos helps you achieve accurate predictions faster, significantly reducing development time compared to traditional methods. In this post, we share how Deutsche Bahn is redefining forecasting using Chronos models, and provide an example use case to demonstrate how you can get started using Chronos.
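To make the "time series as a language" idea concrete, here is a small zero-shot sketch using the open-source chronos-forecasting package locally rather than a Bedrock Marketplace endpoint; the model ID and the toy series are illustrative, not Deutsche Bahn data.

```python
# Illustrative zero-shot forecast with a Chronos model via the open-source
# chronos-forecasting package. Model ID and sample series are placeholders.
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained("amazon/chronos-t5-small")

# Historical observations (e.g. daily passenger counts). Chronos tokenizes the
# series and samples future values much like an LLM samples tokens.
context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])

forecast = pipeline.predict(context, prediction_length=4)  # [series, samples, horizon]
median = forecast[0].quantile(0.5, dim=0)  # median path across sampled trajectories
print(median)
```

Because the model is pre-trained on a broad corpus of time series, no task-specific training loop is needed before producing a first forecast; sampling multiple trajectories also gives you prediction intervals for free.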
Use custom metrics to evaluate your generative AI application with HAQM Bedrock
Now with HAQM Bedrock, you can develop custom evaluation metrics for both model and RAG evaluations. This capability extends the LLM-as-a-judge framework that drives HAQM Bedrock Evaluations. In this post, we demonstrate how to use custom metrics in HAQM Bedrock Evaluations to measure and improve the performance of your generative AI applications according to your specific business requirements and evaluation criteria.
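For intuition on what a custom metric captures, the sketch below hand-rolls the LLM-as-a-judge pattern with the Bedrock Converse API: a judge model scores a response against a business-specific rubric. HAQM Bedrock Evaluations manages this for you; the rubric, judge model ID, 1 to 5 scale, and JSON output contract here are assumptions for illustration only.

```python
# Hand-rolled sketch of the LLM-as-a-judge idea behind a custom metric.
# Rubric, model ID, and scoring scale are illustrative; assumes the judge
# returns well-formed JSON as instructed.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

JUDGE_PROMPT = """You are grading a customer-support answer.
Metric: brand-voice compliance (friendly, concise, no unapproved promises).
Score it from 1 (poor) to 5 (excellent) and explain briefly.
Return JSON only: {{"score": <int>, "reason": "<string>"}}

Question: {question}
Answer: {answer}"""

def judge(question: str, answer: str) -> dict:
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=[{
            "role": "user",
            "content": [{"text": JUDGE_PROMPT.format(question=question, answer=answer)}],
        }],
        inferenceConfig={"temperature": 0.0, "maxTokens": 300},
    )
    return json.loads(response["output"]["message"]["content"][0]["text"])

print(judge("Can I return my order?", "Absolutely! You have 30 days, no questions asked."))
```

In HAQM Bedrock Evaluations you would express the same rubric as a custom metric definition and let the service run it across your evaluation dataset instead of scoring responses one by one.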
Build a gen AI–powered financial assistant with HAQM Bedrock multi-agent collaboration
This post explores a financial assistant system that specializes in three key tasks: portfolio creation, company research, and communication. It illustrates how multiple specialized agents work together through the HAQM Bedrock multi-agent collaboration capability, with particular emphasis on their application in financial analysis.
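From the application's point of view, the whole multi-agent system is reached through a single supervisor agent. The sketch below shows one way that call might look once the collaborator agents are configured; the agent ID, alias ID, and request text are placeholders, not values from the post.

```python
# Sketch of invoking a supervisor agent after the portfolio, research, and
# communication collaborators have been set up in HAQM Bedrock.
# Agent ID, alias ID, and the request text are placeholders.
import uuid
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = bedrock_agent_runtime.invoke_agent(
    agentId="SUPERVISOR_AGENT_ID",
    agentAliasId="SUPERVISOR_ALIAS_ID",
    sessionId=str(uuid.uuid4()),
    inputText="Build a conservative tech portfolio and summarize recent filings for its top holding.",
)

# The agent streams its answer back as chunks in an event stream.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in response["completion"]
    if "chunk" in event
)
print(answer)
```

The supervisor decides which specialized agent handles each part of the request, so the caller never orchestrates the individual agents directly.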
WordFinder app: Harnessing generative AI on AWS for aphasia communication
In this post, we showcase how Dr. Kori Ramajoo, Dr. Sonia Brownsett, and Prof. David Copland from QARC, together with Scott Harding, a person living with aphasia, used AWS generative AI services to develop WordFinder, a mobile, cloud-based solution that helps individuals with aphasia increase their independence.
Get faster, actionable AWS Trusted Advisor insights to make data-driven decisions using HAQM Q Business
In this post, we show how to create an application using HAQM Q Business with Jira integration that uses a dataset containing a Trusted Advisor detailed report. This solution demonstrates how to use new generative AI services like HAQM Q Business to surface data insights faster and make them actionable.
Best practices for Meta Llama 3.2 multimodal fine-tuning on HAQM Bedrock
In this post, we share comprehensive best practices and scientific insights for fine-tuning Meta Llama 3.2 multimodal models on HAQM Bedrock. By following these guidelines, you can fine-tune smaller, more cost-effective models to achieve performance that rivals or even surpasses much larger models—potentially reducing both inference costs and latency, while maintaining high accuracy for your specific use case.
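As a rough sketch of where those best practices get applied, the snippet below submits a fine-tuning job on HAQM Bedrock with boto3. The base model identifier, S3 URIs, IAM role, and hyperparameter names and values are placeholders; the post's guidance should drive the actual choices for your dataset.

```python
# Sketch of submitting a fine-tuning job on HAQM Bedrock. Model identifier,
# S3 URIs, role ARN, and hyperparameter names/values are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

bedrock.create_model_customization_job(
    jobName="llama32-11b-multimodal-ft",
    customModelName="llama32-11b-product-catalog",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    baseModelIdentifier="meta.llama3-2-11b-instruct-v1:0",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "3", "learningRate": "0.00001", "batchSize": "1"},
)
```

Once the job completes, the resulting custom model can be evaluated against the larger base models to confirm the cost and latency savings described above before you move it to production traffic.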