AWS Machine Learning Blog


Racing beyond DeepRacer: Debut of the AWS LLM League

The AWS LLM League was designed to lower the barriers to entry in generative AI model customization by providing an experience where participants, regardless of their prior data science experience, could engage in fine-tuning LLMs. Using HAQM SageMaker JumpStart, attendees were guided through the process of customizing LLMs to address real business challenges adaptable to their domain.
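As a rough illustration of the workflow participants follow (not the League's actual configuration), the SageMaker Python SDK exposes JumpStart foundation models through a JumpStartEstimator; the model ID, hyperparameters, and S3 path below are placeholders:

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Fine-tune a JumpStart foundation model on a domain dataset stored in S3.
# Model ID, hyperparameters, and paths are illustrative placeholders.
estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",
    environment={"accept_eula": "true"},  # required to accept the model license
    instance_type="ml.g5.12xlarge",
)
estimator.set_hyperparameters(instruction_tuned="True", epoch="3")
estimator.fit({"training": "s3://my-bucket/llm-league/train/"})

# Deploy the customized model to a real-time endpoint for evaluation.
predictor = estimator.deploy()
print(predictor.predict({"inputs": "Summarize our Q3 support trends."}))
```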

How AWS Sales uses generative AI to streamline account planning

Every year, AWS Sales personnel draft in-depth, forward-looking strategy documents for established AWS customers. These documents help the AWS Sales team to align with our customer growth strategy and to collaborate with the entire sales team on long-term growth ideas for AWS customers. In this post, we showcase how the AWS Sales product team built a generative AI assistant that drafts account plans.

Revolutionizing customer service: MaestroQA’s integration with HAQM Bedrock for actionable insight

In this post, we dive deeper into one of MaestroQA’s key features, conversation analytics, which helps support teams uncover customer concerns, address points of friction, adapt support workflows, and identify areas for coaching through the use of HAQM Bedrock. We discuss the unique challenges MaestroQA overcame and how they use AWS to build new features, drive customer insights, and improve operational efficiency.
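As a minimal sketch of the underlying pattern (not MaestroQA's actual implementation), a support transcript can be sent to a Bedrock-hosted model through the Converse API; the model ID and prompt here are illustrative assumptions:

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

transcript = """Agent: Thanks for contacting support, how can I help?
Customer: My last two orders arrived late and nobody followed up."""

# Ask a Bedrock-hosted model to extract concerns, sentiment, and coaching
# opportunities from a support conversation. Model ID is illustrative only.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Identify the customer's main concerns, overall sentiment, "
                             "and any coaching opportunities in this transcript:\n\n" + transcript}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```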


How AWS sales uses HAQM Q Business for customer engagement

In April 2024, we launched our AI sales assistant, which we call Field Advisor, making it available to AWS employees in the Sales, Marketing, and Global Services organization, powered by HAQM Q Business. Since that time, thousands of active users have asked hundreds of thousands of questions through Field Advisor, which we have embedded in our customer relationship management (CRM) system, as well as through a Slack application.
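Programmatically, an assistant like this can query HAQM Q Business through the ChatSync API in the AWS SDK. The application ID and question below are placeholders rather than Field Advisor's real values, and depending on how identity is configured you may also need to pass user information:

```python
import boto3

qbusiness = boto3.client("qbusiness")

# Ask a question against an existing Q Business application.
# The application ID is a placeholder for illustration.
response = qbusiness.chat_sync(
    applicationId="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    userMessage="Summarize recent engagement history and open opportunities for customer Example Corp.",
)

print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"), source.get("url"))
```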


Real value, real time: Production AI with HAQM SageMaker and Tecton

In this post, we discuss how HAQM SageMaker and Tecton work together to simplify the development and deployment of production-ready AI applications, particularly for real-time use cases like fraud detection. The integration enables faster time to value by abstracting away complex engineering tasks, allowing teams to focus on building features and use cases while providing a streamlined framework for both offline training and online serving of ML models.
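As a rough sketch of the online-serving half of that lifecycle, an application might fetch precomputed features from Tecton's online store and score them against a SageMaker real-time endpoint. The workspace, feature service, and endpoint names are assumptions for illustration, not a definitive integration:

```python
import json

import boto3
import tecton

# Fetch fresh feature values for the entity being scored from Tecton's
# online store (workspace and service names are placeholders).
workspace = tecton.get_workspace("prod")
feature_service = workspace.get_feature_service("fraud_detection_service")
features = feature_service.get_online_features(
    join_keys={"user_id": "user-123"}
).to_dict()

# Score the request with a model hosted on a SageMaker real-time endpoint
# (endpoint name is a placeholder).
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="fraud-detection-endpoint",
    ContentType="application/json",
    Body=json.dumps(features),
)
print(json.loads(response["Body"].read()))
```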

Enabling production-grade generative AI: New capabilities lower costs, streamline production, and boost security

As generative AI moves from proofs of concept (POCs) to production, we’re seeing a massive shift in how businesses and consumers interact with data, information—and each other. In what we consider “Act 1” of the generative AI story, we saw previously unimaginable amounts of data and compute create models that showcase the power of generative […]

AI21 Labs Jamba-Instruct model is now available in HAQM Bedrock

We are excited to announce the availability of the Jamba-Instruct large language model (LLM) in HAQM Bedrock. Jamba-Instruct is built by AI21 Labs, and most notably supports a 256,000-token context window, making it especially useful for processing large documents and complex Retrieval Augmented Generation (RAG) applications. What is Jamba-Instruct? Jamba-Instruct is an instruction-tuned version of […]
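A minimal invocation sketch, assuming the ai21.jamba-instruct-v1:0 model ID and AI21's chat-style request format (verify both against the current HAQM Bedrock documentation):

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

# With a 256,000-token context window, a large document can be passed inline.
document_text = "..."  # placeholder for a long contract, report, or RAG context

body = {
    "messages": [
        {
            "role": "user",
            "content": "Summarize the key points of this document:\n\n" + document_text,
        }
    ],
    "max_tokens": 1024,
    "temperature": 0.3,
}

response = bedrock.invoke_model(
    modelId="ai21.jamba-instruct-v1:0",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["choices"][0]["message"]["content"])
```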

Boost your content editing with Contentful and HAQM Bedrock

This post is co-written with Matt Middleton from Contentful. Today, jointly with Contentful, we are announcing the launch of the AI Content Generator powered by HAQM Bedrock. The AI Content Generator powered by HAQM Bedrock is an app available on the Contentful Marketplace that allows users to create, rewrite, summarize, and translate content using cutting-edge […]