AWS Machine Learning Blog
Create a next generation chat assistant with HAQM Bedrock, HAQM Connect, HAQM Lex, LangChain, and WhatsApp
This post is co-written with Harrison Chase, Erick Friis and Linda Ye from LangChain.
Generative AI is set to revolutionize user experiences over the next few years. A crucial step in that journey involves bringing in AI assistants that intelligently use tools to help customers navigate the digital landscape. In this post, we demonstrate how to deploy a contextual AI assistant. Built using HAQM Bedrock Knowledge Bases, HAQM Lex, and HAQM Connect, with WhatsApp as the channel, our solution provides users with a familiar and convenient interface.
HAQM Bedrock Knowledge Bases gives foundation models (FMs) and agents contextual information from your company’s private data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, accurate, and customized responses. It also offers a powerful solution for organizations seeking to enhance their generative AI–powered applications. This feature simplifies the integration of domain-specific knowledge into conversational AI through native compatibility with HAQM Lex and HAQM Connect. By automating document ingestion, chunking, and embedding, it eliminates the need to manually set up complex vector databases or custom retrieval systems, significantly reducing development complexity and time.
The result is improved accuracy in FM responses, with reduced hallucinations due to grounding in verified data. Cost efficiency is achieved through minimized development resources and lower operational costs compared to maintaining custom knowledge management systems. The solution’s scalability quickly accommodates growing data volumes and user queries thanks to AWS serverless offerings. It also uses the robust security infrastructure of AWS to maintain data privacy and regulatory compliance. With the ability to continuously update and add to the knowledge base, AI applications stay current with the latest information. By choosing HAQM Bedrock Knowledge Bases, organizations can focus on creating value-added AI applications while AWS handles the intricacies of knowledge management and retrieval, enabling faster deployment of more accurate and capable AI solutions with less effort.
Prerequisites
To implement this solution, you need the following:
- An AWS account with permissions to create resources in HAQM Bedrock, HAQM Lex, HAQM Connect, and AWS Lambda.
- Model access to Anthropic's Claude 3 Haiku model on HAQM Bedrock. To enable access, follow the steps at Access HAQM Bedrock foundation models.
- A WhatsApp business account to integrate with HAQM Connect.
- Product documentation, knowledge articles, or other relevant data to ingest into the knowledge base in a compatible format such as PDF or text.
Solution overview
This solution uses several key AWS AI services to build and deploy the AI assistant:
- HAQM Bedrock – HAQM Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and HAQM through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI
- HAQM Bedrock Knowledge Bases – Gives the AI assistant contextual information from a company’s private data sources
- HAQM OpenSearch Service – Works as the vector store that is natively supported by HAQM Bedrock Knowledge Bases
- HAQM Lex – Enables building the conversational interface for the AI assistant, including defining intents and slots
- HAQM Connect – Powers the integration with WhatsApp to make the AI assistant available to users on the popular messaging application
- AWS Lambda – Runs the code to integrate the services and implement the LangChain agent that forms the core logic of the AI assistant
- HAQM API Gateway – Receives the incoming requests from WhatsApp and routes them to AWS Lambda for further processing
- HAQM DynamoDB – Stores the messages received and generated to enable conversation memory
- HAQM SNS – Handles the routing of the outgoing response from HAQM Connect
- LangChain – Provides a powerful abstraction layer for building the LangChain agent that helps your FMs perform context-aware reasoning
- LangSmith – Uploads agent traces to LangSmith for added observability, including debugging, monitoring, and testing and evaluation capabilities (a minimal configuration sketch follows this list)
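LangSmith tracing is switched on through environment variables. The following is a minimal sketch, assuming a hypothetical project name and placeholder API key; in this solution, the CloudFormation template described later wires the key through Parameter Store instead.

```python
# Minimal LangSmith tracing setup (sketch; project name is hypothetical).
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"            # enable trace uploads
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"     # key generated in LangSmith
os.environ["LANGCHAIN_PROJECT"] = "whatsapp-assistant" # hypothetical project name

# With these set, LangChain agents and chains automatically send run traces
# to LangSmith for debugging, monitoring, and evaluation.
```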
The following diagram illustrates the architecture.
Flow description
Numbers in red on the right side of the diagram illustrate the data ingestion process:
- Upload files to the HAQM Simple Storage Service (HAQM S3) data source
- New files trigger a Lambda function
- The Lambda function invokes the sync operation of the knowledge base data source (a sketch of this function follows the list)
- HAQM Bedrock Knowledge Bases fetches the data from HAQM S3, chunks it, and generates the embeddings through the FM of your selection
- HAQM Bedrock Knowledge Bases stores the embeddings in HAQM OpenSearch Service
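To make steps 2 and 3 concrete, the following is a minimal sketch of the sync Lambda function, assuming the knowledge base and data source IDs are passed in through environment variables (the variable names here are hypothetical; the deployed function may differ in detail).

```python
import os
import boto3

bedrock_agent = boto3.client("bedrock-agent")

def lambda_handler(event, context):
    # Triggered by the S3 upload notification; start an ingestion (sync) job
    # so the knowledge base fetches, chunks, and embeds the new files.
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId=os.environ["KNOWLEDGE_BASE_ID"],  # hypothetical env var
        dataSourceId=os.environ["DATA_SOURCE_ID"],        # hypothetical env var
    )
    return {"ingestionJobId": job["ingestionJob"]["ingestionJobId"]}
```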
Numbers on the left side of the diagram illustrate the messaging process:
- User initiates communication by sending a message through WhatsApp to the webhook hosted on HAQM API Gateway.
- HAQM API Gateway routes the incoming message to the inbound message handler, executed on AWS Lambda.
- The inbound message handler records the user’s contact details in HAQM DynamoDB.
- For first-time users, the inbound message handler establishes a new session in HAQM Connect and logs it in DynamoDB. For returning users, it resumes their existing HAQM Connect session.
- HAQM Connect forwards the user’s message to HAQM Lex for natural language processing.
- HAQM Lex triggers the LangChain AI assistant, implemented as a Lambda function.
- The LangChain AI assistant retrieves the conversation history from DynamoDB.
- Using HAQM Bedrock Knowledge Bases, the LangChain AI assistant fetches relevant contextual information.
- The LangChain AI assistant compiles a prompt, incorporating context data and the user's query, and submits it to an FM running on HAQM Bedrock (steps 7-10 are sketched after this list).
- HAQM Bedrock processes the input and returns the model’s response to the LangChain AI assistant.
- The LangChain AI assistant relays the model’s response back to HAQM Lex.
- HAQM Lex transmits the model’s response to HAQM Connect.
- HAQM Connect publishes the model’s response to HAQM Simple Notification Service (HAQM SNS).
- HAQM SNS triggers the outbound message handler Lambda function.
- The outbound message handler retrieves the relevant chat contact information from HAQM DynamoDB.
- The outbound message handler dispatches the response to the user through Meta’s WhatsApp API.
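The following is a simplified sketch of steps 7 through 10, assuming a hypothetical DynamoDB table schema and knowledge base ID; the deployed LangChain agent wraps this behavior with additional tooling.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
history_table = dynamodb.Table("whatsapp-chat-history")  # hypothetical table name
agent_runtime = boto3.client("bedrock-agent-runtime")
bedrock_runtime = boto3.client("bedrock-runtime")

KB_ID = "XXXXXXXXXX"  # hypothetical knowledge base ID

def answer(session_id: str, user_message: str) -> str:
    # Step 7: retrieve prior turns for this session (schema is illustrative).
    item = history_table.get_item(Key={"sessionId": session_id}).get("Item", {})
    history = item.get("messages", [])

    # Step 8: fetch relevant context from the knowledge base.
    results = agent_runtime.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": user_message},
    )["retrievalResults"]
    context = "\n\n".join(r["content"]["text"] for r in results)

    # Steps 9-10: compile the prompt and invoke Claude 3 Haiku on HAQM Bedrock.
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        system=[{"text": f"Answer using only this context:\n{context}"}],
        messages=history + [{"role": "user", "content": [{"text": user_message}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```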
Deploying this AI assistant involves three main steps:
- Create the knowledge base using HAQM Bedrock Knowledge Bases and ingest relevant product documentation, FAQs, knowledge articles, and other useful data that the AI assistant can use to answer user questions. The data should cover the key use cases and topics the AI assistant will support.
- Create a LangChain agent that powers the AI assistant's logic. The agent is implemented in a Lambda function and uses the knowledge base as its primary tool to look up information (a minimal sketch follows this list). Deploying the agent with other resources is automated through the provided AWS CloudFormation template. See the list of resources in the next section.
- Create the HAQM Connect instance and configure the WhatsApp integration. This allows users to chat with the AI assistant using WhatsApp, providing a familiar interface and enabling rich interactions such as images and buttons. WhatsApp’s popularity improves the accessibility of the AI assistant.
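The following is a minimal sketch of how the agent in step 2 can expose the knowledge base as a LangChain tool, assuming the langchain-aws and langchain-core packages and a hypothetical knowledge base ID; the template's actual agent code may differ in detail.

```python
import boto3
from langchain_aws import ChatBedrock
from langchain_core.tools import tool

agent_runtime = boto3.client("bedrock-agent-runtime")
KB_ID = "XXXXXXXXXX"  # hypothetical knowledge base ID

@tool
def search_docs(query: str) -> str:
    """Look up product documentation in the knowledge base."""
    results = agent_runtime.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": query},
    )["retrievalResults"]
    return "\n\n".join(r["content"]["text"] for r in results)

# Bind the tool to Claude 3 Haiku so the model can decide when to call it.
llm = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")
agent_llm = llm.bind_tools([search_docs])
```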
Solution deployment
We’ve provided pre-built AWS CloudFormation templates that deploy everything you need in your AWS account.
- Sign in to the AWS console if you aren’t already.
- Choose the following Launch Stack button to open the CloudFormation console and create a new stack.
- Enter the following parameters:
  - StackName: The name of your stack, for example, WhatsAppAIStack
  - LangchainAPIKey: The API key generated through LangChain
| Region | Deploy button | Template URL – use to upgrade an existing stack to a new release | AWS CDK stack to customize as needed |
|---|---|---|---|
| N. Virginia (us-east-1) | Launch Stack | YML | GitHub |
- Check the box to acknowledge that you are creating AWS Identity and Access Management (IAM) resources and choose Create Stack.
- Wait for stack creation to complete (approximately 10 minutes). The stack creates the following resources:
- LangChain agent
- HAQM Lex bot
- HAQM Bedrock Knowledge Base
- The vector store (HAQM OpenSearch Serverless)
- Lambda functions (for data ingestion and providers)
- Data source (HAQM S3)
- DynamoDB table
- Parameter Store for the LangChain API key
- IAM roles and permissions
- Upload files to the data source (HAQM S3) created for WhatsApp. As soon as you upload a file, the data source will synchronize automatically.
- To test the agent, on the HAQM Lex console, select the most recently created assistant. Choose English, choose Test, and send it a message.
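You can also exercise the bot programmatically. The following is a small sketch using the Lex V2 runtime API, with placeholder IDs you'd replace with your bot's values.

```python
import uuid
import boto3

lex = boto3.client("lexv2-runtime")

# Send a test utterance to the bot and print its replies.
response = lex.recognize_text(
    botId="BOTID12345",       # placeholder: your bot ID
    botAliasId="TSTALIASID",  # placeholder: your bot alias ID
    localeId="en_US",
    sessionId=str(uuid.uuid4()),
    text="What products do you offer?",
)
for message in response.get("messages", []):
    print(message["content"])
```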
Create the HAQM Connect instance and integrate WhatsApp
Configure HAQM Connect to integrate with your WhatsApp business account and enable the WhatsApp channel for the AI assistant:
- Navigate to HAQM Connect in the AWS console. If you haven’t already, create an instance. Copy your Instance ARN under Distribution settings. You will need this information later to link your WhatsApp business account.
- Choose your instance, then in the navigation panel, choose Flows. Scroll down and select HAQM Lex. Select your bot and choose Add HAQM Lex Bot.
- In the navigation panel, choose Overview. Under Access Information, choose Log in for emergency access.
- On the HAQM Connect console, under Routing in the navigation panel, choose Flows. Choose Create flow. Drag a Get customer input block onto the flow. Select the block. Select Text-to-speech or chat text and add an intro message such as, “Hello, how can I help you today?” Scroll down and choose HAQM Lex, then select the HAQM Lex bot you created in step 2.
- After you save the block, add a Disconnect block. Drag the Entry arrow to the Get customer input block, and the Get customer input block's arrow to Disconnect. Choose Publish.
- After it’s published, choose Show additional flow information at the bottom of the navigation panel. Copy the flow’s HAQM Resource Name (ARN), which you will need to deploy the WhatsApp integration. The following screenshot shows the HAQM Connect console with the flow.
- Deploy the WhatsApp integration as detailed in Provide WhatsApp messaging as a channel with HAQM Connect.
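For reference, the outbound message handler ultimately calls Meta's WhatsApp API. The following is a sketch using the WhatsApp Cloud API with hypothetical credentials; the integration described in the linked post manages this for you.

```python
import requests

PHONE_NUMBER_ID = "123456789"       # hypothetical WhatsApp phone number ID
ACCESS_TOKEN = "<your-meta-token>"  # hypothetical Meta access token

def send_whatsapp_message(to_number: str, body: str) -> None:
    # Send a plain-text reply through Meta's WhatsApp Cloud API.
    resp = requests.post(
        f"https://graph.facebook.com/v17.0/{PHONE_NUMBER_ID}/messages",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={
            "messaging_product": "whatsapp",
            "to": to_number,
            "type": "text",
            "text": {"body": body},
        },
    )
    resp.raise_for_status()
```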
Testing the solution
Interact with the AI assistant through WhatsApp, as shown in the following video:
Clean up
To avoid incurring ongoing costs, delete the resources after you are done:
- Delete the CloudFormation stacks.
- Delete the HAQM Connect instance.
Conclusion
This post showed you how to create an intelligent conversational AI assistant by integrating HAQM Bedrock, HAQM Lex, and HAQM Connect and deploying it on WhatsApp.
The solution ingests relevant data into a knowledge base on HAQM Bedrock Knowledge Bases, implements a LangChain agent that uses the knowledge base to answer questions, and makes the agent available to users through WhatsApp. This provides an accessible, intelligent AI assistant that can guide users through your company’s products and services.
Possible next steps include customizing the AI assistant for your specific use case, expanding the knowledge base, and analyzing conversation logs with LangSmith to identify issues, resolve errors, and diagnose performance bottlenecks in your FM call sequence.
About the Authors
Kenton Blacutt is an AI Consultant within the GenAI Innovation Center. He works hands-on with customers helping them solve real-world business problems with cutting edge AWS technologies, especially HAQM Q and Bedrock. In his free time, he likes to travel, experiment with new AI techniques, and run an occasional marathon.
Lifeth Álvarez is a Cloud Application Architect at HAQM. She enjoys working closely with others, embracing teamwork and autonomous learning. She likes to develop creative and innovative solutions, applying special emphasis on details. She enjoys spending time with family and friends, reading, playing volleyball, and teaching others.
Mani Khanuja is a Tech Lead – Generative AI Specialist, author of the book Applied Machine Learning and High Performance Computing on AWS, and a member of the Board of Directors for Women in Manufacturing Education Foundation Board. She leads machine learning projects in various domains such as computer vision, natural language processing, and generative AI. She speaks at internal and external conferences such as AWS re:Invent, Women in Manufacturing West, YouTube webinars, and GHC 23. In her free time, she likes to go for long runs along the beach.
Linda Ye leads product marketing at LangChain. Previously, she worked at Sentry, Splunk, and Harness, driving product and business value for technical audiences, and studied economics at Stanford. In her free time, Linda enjoys writing half-baked novels, playing tennis, and reading.
Erick Friis, Founding Engineer at LangChain, currently spends most of his time on the open source side of the company. He’s an ex-founder with a passion for language-based applications. He spends his free time outdoors on skis or training for triathlons.
Harrison Chase is the CEO and cofounder of LangChain, an open source framework and toolkit that helps developers build context-aware reasoning applications. Prior to starting LangChain, he led the ML team at Robust Intelligence, led the entity linking team at Kensho, and studied statistics and computer science at Harvard.