AWS Database Blog
HAQM Timestream for HAQM Connect real-time monitoring
HAQM Connect is an easy-to-use cloud contact center solution that helps companies of any size deliver superior customer service at a lower cost. Connect has many real-time monitoring capabilities. For requirements that go beyond those supported out of the box, HAQM Connect also provides you with data and APIs you can use to implement your own custom real-time monitoring solution. Such a solution enables you to monitor your contact center with your standard monitoring tools, create custom metrics that reflect your business rules, visualize the data according to user preferences, secure access to the data with custom rules, aggregate third-party data from other systems, and integrate the real-time data into other applications.
Organizations benefit from real-time or near-real-time monitoring of the activities in their contact centers. Assessing agent performance and identifying issues while calls are happening empowers supervisors to prevent negative customer experiences with swift corrective actions. Dashboards help teams focus on the metrics that impact their success.
In this post, we show you how to use HAQM Timestream to build the data layer for HAQM Connect real-time monitoring. To get started, you can use the AWS CloudFormation template and step-by-step instructions provided in this post. The solution can be deployed in multiple Regions. Because data is collected in an HAQM Timestream database, the solution must be deployed in at least one Region where HAQM Timestream is available.
Solution overview
The following architecture diagram provides a high-level view of the solution.
The solution uses the following AWS services:
- AWS CloudFormation
- HAQM Connect
- HAQM EventBridge
- AWS IAM Identity Center (successor to AWS Single Sign-On)
- AWS Identity and Access Management (IAM)
- HAQM Kinesis
- AWS Lambda
- AWS Organizations
- HAQM Timestream
By default, HAQM Connect integrates with HAQM EventBridge to publish contact events whenever the state of a contact changes in the HAQM Connect contact center. HAQM Connect can also integrate with HAQM Kinesis to publish agent events whenever an agent’s state or activity changes.
As part of the solution workflow, EventBridge and Kinesis trigger the corresponding AWS Lambda function provided for each entity type. Each Lambda function parses the incoming event message, transforms the data into a tabular format, and persists it in the HAQM Timestream database tables.
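The transformation step in each Lambda function can be sketched as follows. This is a minimal illustration, not the stack's actual code: the dimension and measure names are assumptions, and the real functions define their own table schemas.

```python
from datetime import datetime, timezone

def contact_event_to_record(event):
    """Map an EventBridge contact event to a Timestream record dict.

    Illustrative sketch: field names under "detail" follow the HAQM
    Connect contact events shape, but the dimensions and measure chosen
    here are examples only.
    """
    detail = event["detail"]
    # Timestream expects the record time as a string, here in milliseconds.
    now_ms = str(int(datetime.now(timezone.utc).timestamp() * 1000))
    return {
        "Dimensions": [
            {"Name": "contactId", "Value": detail["contactId"]},
            {"Name": "channel", "Value": detail.get("channel", "VOICE")},
        ],
        "MeasureName": "eventType",
        "MeasureValue": detail["eventType"],
        "MeasureValueType": "VARCHAR",
        "Time": now_ms,
        "TimeUnit": "MILLISECONDS",
    }

# A handler would then persist such records with the Timestream write client:
# boto3.client("timestream-write").write_records(
#     DatabaseName="<your-database>", TableName="ContactEvent", Records=[record])
sample = {
    "detail": {
        "contactId": "abc-123",
        "channel": "VOICE",
        "eventType": "INITIATED",
    }
}
record = contact_event_to_record(sample)
```

The same pattern applies to agent events arriving from Kinesis; only the parsing of the incoming payload differs.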
To enrich the data received from the previous events, a Lambda function triggered by HAQM EventBridge Scheduler periodically retrieves queue and user information from HAQM Connect through the HAQM Connect APIs and persists it in the HAQM Timestream database tables. This enables advanced queries, such as querying the tag values of a queue or the full name of an agent.
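The periodic enrichment step can be sketched like this. The HAQM Connect ListQueues API is real and paginated; the row shape is an assumption for illustration, and the client is passed in so the function can be exercised without AWS credentials.

```python
def fetch_queue_rows(connect_client, instance_id):
    """Collect queue identifiers and names for enrichment rows.

    Illustrative sketch: the stack's scheduler-driven Lambda function
    defines its own schema. ListQueues is paginated, so a boto3
    paginator is used to follow NextToken transparently.
    """
    rows = []
    paginator = connect_client.get_paginator("list_queues")
    for page in paginator.paginate(InstanceId=instance_id):
        for queue in page["QueueSummaryList"]:
            rows.append({
                "queueId": queue["Id"],
                "name": queue.get("Name", ""),
                "arn": queue["Arn"],
            })
    return rows

# Stub client standing in for boto3.client("connect") for demonstration.
class _StubPaginator:
    def paginate(self, **kwargs):
        yield {"QueueSummaryList": [{
            "Id": "q1",
            "Name": "Sales",
            "Arn": "arn:aws:connect:us-east-1:123456789012:instance/demo/queue/q1",
        }]}

class _StubConnect:
    def get_paginator(self, operation_name):
        return _StubPaginator()

rows = fetch_queue_rows(_StubConnect(), "demo-instance-id")
```

In the deployed solution, the returned rows would be written to a Timestream table so that queries can join event data with queue metadata.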
You can extend this architecture to ingest data from third-party data sources to support additional analytics use cases. You can create new tables in the Timestream database, ingest third-party data, and run queries that join the provided tables with the third-party data tables.
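A join against a third-party table might look like the following. The CrmAccount table and all column names here are hypothetical; you would create and populate such a table yourself, and the database name assumes the default stack name.

```python
# Hypothetical query joining the solution's contact events with a
# user-created third-party table. Table and column names are examples
# only; substitute the names from your own deployment.
JOIN_QUERY = """
SELECT c.contactId,
       c.measure_value::varchar AS event_type,
       a.account_tier
FROM "ConnectRealTime"."ContactEvent" c
JOIN "ConnectRealTime"."CrmAccount" a
  ON a.contact_id = c.contactId
WHERE c.time > ago(1h)
"""
```

Queries like this can be run in the Timestream query editor or through the query API, just like queries against the provided tables alone.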
Deploying the solution in multiple AWS Regions
If you have HAQM Connect instances in multiple Regions, or if you have your HAQM Connect instance in a Region that does not support HAQM Timestream, you can deploy this solution to multiple Regions.
First, in a Region that supports HAQM Timestream, deploy the solution with the Deploy all resources including database option. For other Regions containing HAQM Connect instances from which you want to collect data, deploy the solution using the same stack name in each Region. Use the Deploy to collect HAQM Connect data from current option, indicating the Region where you deployed the Timestream database.
In the following example, where HAQM Timestream is in the us-east-1 Region, you would first deploy to the us-east-1 Region with the Deploy all resources including database – Current region supports HAQM Timestream option, then deploy to the ca-central-1 Region with the Deploy to collect HAQM Connect data from current Region – HAQM Timestream database is deployed to Region: us-east-1 option.
Prerequisites
Before you deploy the solution, ensure the following:
- An active AWS account with permissions to create and modify IAM roles.
- An HAQM Connect contact center instance in the same AWS account and Region from which you will collect the data to visualize in your dashboards.
- To deploy with the Deploy all resources including database – Current region supports HAQM Timestream option, your current Region must support HAQM Timestream. An HAQM Connect instance is optional in this Region.
- To deploy to other Regions, first complete a deployment that includes the Timestream database in another Region in the same account.
- To estimate the cost of implementing this solution in your AWS account, refer to the AWS Pricing Calculator.
Deploy data collection resources with AWS CloudFormation
To deploy the solution, complete the following steps:
- Choose Launch Stack to deploy the solution:
- Enter a unique stack name. Using the default name (ConnectRealTime) reduces the amount of reconfiguration in later steps.
- For the AgentEventDataStreamArn parameter, enter the ARN of the Kinesis data stream that is configured to stream agent events in your HAQM Connect instance. If you don’t already have a stream configured, leave this parameter empty. The CloudFormation stack will create a new Kinesis data stream for you to set as the destination for agent events.
- Choose a DeploymentModel. This solution allows you to have one database in a Region that supports HAQM Timestream, and to collect data from HAQM Connect instances in other Regions.
The DeploymentModel parameter selects whether to create a new database during deployment. Select Deploy all resources including database – Current region supports HAQM Timestream if your current Region supports HAQM Timestream and you want the database created in this Region. Besides the database, HAQM Connect integration components are created to collect data from the HAQM Connect instances in the current Region. Select one of the other options if you have already deployed the solution with the database in another Region and you want the deployment in the current Region to collect data and store it in the database created in that remote Region.
- Select the acknowledgement check box and choose Create stack.
If you see an error such as You are trying to install the HAQM Timestream database and tables to a Region that does not support HAQM Timestream, delete the stack and redeploy, either in a Region that supports HAQM Timestream or with one of the options that stores data in a remote Region.
- When stack creation is complete, you can navigate to the Outputs tab to see the data stream.
- On the HAQM Connect console, navigate to your instance.
- In the Data Streaming section, choose the Kinesis Data Stream to be the destination for Agent Events and choose Save.
The data collection pipeline is complete. Make sure your contact center instance has some activity that creates some data.
- Proceed to the HAQM Timestream console’s Query Editor (the Outputs tab in the CloudFormation stack also has a link for easy navigation).
- Choose the options menu (three dots) next to AgentEvent and choose Preview data.
- Verify the data with SQL queries.
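A verification query might look like the following. The database and table names assume the default stack name, and the column names are illustrative; adjust them to match the schema you see in the data preview.

```python
# Example verification query for recent agent events. Names are based
# on the default stack name "ConnectRealTime"; adjust to your deployment.
RECENT_AGENT_EVENTS = """
SELECT measure_name, measure_value::varchar, time
FROM "ConnectRealTime"."AgentEvent"
WHERE time > ago(15m)
ORDER BY time DESC
LIMIT 20
"""

# Outside the console, the same query can be run with the query client:
# import boto3
# client = boto3.client("timestream-query")
# result = client.query(QueryString=RECENT_AGENT_EVENTS)
```

An empty result usually means the instance had no agent activity in the queried window; generate some activity and rerun the query.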
Visualization
HAQM Timestream supports integration with Grafana, HAQM QuickSight, JDBC, ODBC, and a variety of third-party tools. Refer to HAQM Connect real time monitoring using HAQM Managed Grafana and HAQM Timestream to use the sample Grafana dashboards provided. You can use the sample dashboards as a starting point for your own visualization solution and extend them to serve your business requirements.
Notable features of Timestream
HAQM Timestream is a cost-effective solution for real-time monitoring. Timestream’s storage management feature reduces storage costs by moving older data to a cost-effective magnetic storage tier, and then deleting data when its magnetic store retention period is over. Default retention values are set for each table, and you can update them: increasing a value lets users query over a longer time frame, while decreasing it reduces storage costs.
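Retention can be changed per table with the Timestream UpdateTable API. The helper below only builds the call arguments so the sketch stays runnable; the database and table names are placeholders based on the default stack name.

```python
def retention_update_params(database, table, memory_hours, magnetic_days):
    """Build arguments for the timestream-write UpdateTable call.

    Longer retention widens the queryable time frame; shorter retention
    reduces storage costs. Names here are placeholders.
    """
    return {
        "DatabaseName": database,
        "TableName": table,
        "RetentionProperties": {
            "MemoryStoreRetentionPeriodInHours": memory_hours,
            "MagneticStoreRetentionPeriodInDays": magnetic_days,
        },
    }

# Apply the change with the write client:
# import boto3
# boto3.client("timestream-write").update_table(
#     **retention_update_params("ConnectRealTime", "AgentEvent", 24, 30))
params = retention_update_params("ConnectRealTime", "AgentEvent", 24, 30)
```

The same settings are also editable on each table's detail page in the Timestream console.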
When you create a table in HAQM Timestream, you don’t have to provide a schema before ingesting data. Tables can adapt to changes in the structure of ingested data. If new fields appear in newly arrived data, they will automatically be added to the table schema.
HAQM Timestream scheduled queries can run aggregates, rollups, and other queries, and store the results in a separate table. This new compact table reduces costs for repeated queries and dashboards, and can be retained longer if needed.
When data is needed for historical reports, or for another system to ingest, you can use Timestream’s UNLOAD statement to export time series data to selected S3 buckets in either Apache Parquet or comma-separated values (CSV) format. UNLOAD gives you the flexibility to store, combine, and analyze your time series data with other services.
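An export might look like the following UNLOAD statement. The bucket name is a placeholder, and the database and table names assume the default stack name; the statement is issued through the Timestream query endpoint like any other query.

```python
# Illustrative UNLOAD statement exporting a week of agent events to S3
# as Parquet. Substitute your own bucket, database, and table names.
UNLOAD_AGENT_EVENTS = """
UNLOAD (
    SELECT * FROM "ConnectRealTime"."AgentEvent" WHERE time > ago(7d)
)
TO 's3://example-export-bucket/agent-events/'
WITH (format = 'PARQUET', compression = 'GZIP')
"""

# Run through the query client:
# import boto3
# boto3.client("timestream-query").query(QueryString=UNLOAD_AGENT_EVENTS)
```

The exported Parquet files can then be queried with services such as HAQM Athena or loaded into a data warehouse.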
Clean up
To remove the resources created by this stack and prevent additional costs, perform the following steps:
- Delete the CloudFormation stack you created for this solution.
Depending on your deployment model, repeat this step in every Region where you deployed a CloudFormation stack for the solution.
Conclusion
In this post, you learned how to collect HAQM Connect real-time data in an HAQM Timestream database. The solution is deployed to your account using a CloudFormation template.
Try the solution and if you have feedback about this post, submit it in the comments section.
About the Authors
Mehmet Demir is a Principal Solutions Architect at HAQM Web Services (AWS) based in Toronto, Canada. He helps customers in building well-architected solutions that support business innovation.
Norbert Funke is a Sr. Timestream Specialist Solutions Architect at AWS based out of New York. He helps customers optimize solutions based on time series data. Prior to joining AWS, he worked for a data consulting company owned by PwC on data architecture and data analytics.