This Guidance demonstrates new tools for media workflows in cloud environments as part of the Cloud-Native Agile Production (CNAP) program. This initiative builds upon the Time Addressable Media Store (TAMS) API specification, originally developed by the British Broadcasting Corporation's Research & Development team. The aim of the CNAP program is to drive industry adoption of TAMS as a cloud-native, open, and interoperable framework for fast-turnaround media workflows in the creation of News, Sports, and Entertainment content. TAMS stores media as discrete chunks in object storage, accessible through an open-source API. This approach eliminates common challenges found in traditional cloud-based media workflows. By using timing and identity as primary identifiers, TAMS enables content-centric workflows that reduce duplicate content and scale effectively, unlike traditional file-based systems.
Please note: see the Disclaimer at the end of this Guidance.
Architecture Diagram

TAMS concept overview
This architecture diagram shows how a Time Addressable Media Store (TAMS) sits at the core of a fast-turnaround workflow for processing live or near-live video and audio content.
Step 1
All media is stored within a Time Addressable Media Store (TAMS). This holds chunked media on object storage with an API to provide the link between content and the media essence.
Step 2
Live video feeds are uploaded as small chunks of media and registered with the TAMS to provide the effect of growing content.
Step 3
File-based content can be uploaded natively or chunked prior to import as required.
Step 4
Content can be processed in near real-time, generating additional versions, such as proxies, triggered through notifications from the storage system.
Step 5
Content analysis can occur asynchronously and in near real-time, enabling rapid access to the outputs of machine learning and artificial intelligence (AI/ML) models. Examples include live subtitling, highlights generation, and content logging.
Step 6
Simple clip-based editing can be performed, and the resulting edits can be published back to the TAMS, referencing the original content.
Step 7
Craft editing can access content from the TAMS and publish back only new segments.
Step 8
Content from the store can be played back as a real-time video stream into production galleries or linear channel playout facilities.
Step 9
Responses from the TAMS API can be readily converted into HTTP Live Streaming (HLS) manifests to enable live or on-demand content streaming from the TAMS (a minimal sketch of this conversion follows Step 11 below).
Step 10
Clips and files can be exported from the store for alternative purposes, such as the rapid distribution of content onto social media platforms.
Step 11
The Media Asset Management (MAM) system maintains references to the content stored within the TAMS, along with the associated rich editorial and time-based metadata.
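As a concrete illustration of Step 9, the sketch below converts a TAMS-style segment listing into a simple HLS media playlist. The endpoint path, the JSON field names (timerange, get_url), and the timerange parsing are assumptions made for illustration; the TAMS API specification defines the actual request and response shapes.

```python
"""Illustrative sketch: turn a TAMS-style segment listing into an HLS media playlist.

Assumptions (not taken from the specification): the segments endpoint path,
the JSON field names ("timerange", "get_url"), and a simplified
"[<start_secs>_<end_secs>)" range format used to derive segment durations.
"""
import json
import urllib.request


def fetch_segments(tams_base_url: str, flow_id: str, timerange: str) -> list[dict]:
    # Hypothetical call to a TAMS-style segments endpoint for a flow and timerange.
    url = f"{tams_base_url}/flows/{flow_id}/segments?timerange={timerange}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())


def parse_timerange(timerange: str) -> tuple[float, float]:
    # Assumed "[start_end)" style range in seconds; the real TAMS format uses
    # second:nanosecond timestamps, so adapt this parsing to the specification.
    start, end = timerange.strip("[]()").split("_")
    return float(start), float(end)


def to_hls_media_playlist(segments: list[dict]) -> str:
    # Build a minimal VOD-style media playlist pointing at each segment's
    # (pre-signed) URL.
    durations, entries = [], []
    for seg in segments:
        start, end = parse_timerange(seg["timerange"])  # assumed field name
        duration = end - start
        durations.append(duration)
        entries += [f"#EXTINF:{duration:.3f},", seg["get_url"]]  # assumed field name
    lines = ["#EXTM3U", "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{int(max(durations, default=1)) + 1}"]
    return "\n".join(lines + entries + ["#EXT-X-ENDLIST"])
```

A live workflow would keep appending entries and omit the ENDLIST tag so players treat the playlist as a growing stream rather than an on-demand asset.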
TAMS data structure
This architecture diagram shows the high-level data structure represented in the TAMS API specification. This diagram establishes the connection between the content that a user would be aware of and the actual media essence, which is stored in multiple formats and segments on the object storage system.
Step 1
In the TAMS data structure, the parent source corresponds to the actual content that a user interacts with. This parent source could be an editorial version of the content or a clip.
Step 2
The second-level source within the data structure allows for the aggregation of the various media types, such as video, audio, or data, into a cohesive collection.
Step 3
The "flow" represents the technical manifestation of the content. This construct contains all the technical metadata necessary to describe the underlying media segments, such as bitrate, resolution, and frame rate.Step 4
Multiple flows can exist for a single piece of content, allowing different formats, such as HD and proxy, to coexist.
Step 5
Flow types include video, audio, and data, allowing the different content types to be referenced in the store.
Step 6
Segments are held on object storage and referenced in the TAMS API. The only interaction between the store and the segments occurs during deletion management.
Step 7
The segments are linked to a flow and exist within the context of that flow's virtual timeline. A time range format, expressed as Epoch time plus nanoseconds, is used to represent the position of each segment along the timeline.
Notes
A segment can be referenced in one or more flows, allowing the reuse of segments between content without duplication at the storage layer.
The TAMS maintains only the metadata required for the storage and referencing of the media content. The rich metadata should be managed within separate systems, such as the Media Asset Management system.
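To make the hierarchy above more concrete, the sketch below models parent sources, per-media-type sources, flows, and segments as plain Python dataclasses. The class and field names are illustrative assumptions; the TAMS API specification defines the authoritative schema and identifiers.

```python
"""Illustrative sketch of the TAMS data hierarchy described above.

The class and field names are assumptions for illustration only.
"""
from dataclasses import dataclass, field


@dataclass
class Segment:
    object_id: str   # key of the media object in object storage
    timerange: str   # position on the flow's timeline, e.g. "[0:0_10:0)"


@dataclass
class Flow:
    flow_id: str
    media_type: str            # "video", "audio", or "data"
    technical_metadata: dict   # e.g. bitrate, resolution, frame rate
    segments: list[Segment] = field(default_factory=list)


@dataclass
class Source:
    source_id: str
    label: str
    flows: list[Flow] = field(default_factory=list)          # e.g. HD and proxy renditions
    children: list["Source"] = field(default_factory=list)   # parent source aggregates per-media-type sources


# Example: a parent source aggregating a video source that has both an HD flow
# and a proxy flow covering the same timeline.
hd = Flow("flow-hd", "video", {"resolution": "1920x1080", "frame_rate": "25"})
proxy = Flow("flow-proxy", "video", {"resolution": "640x360", "frame_rate": "25"})
video_source = Source("src-video", "camera 1 video", flows=[hd, proxy])
parent = Source("src-parent", "camera 1", children=[video_source])
```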
AWS open source TAMS API
This architecture diagram shows the components and data flows within the AWS open source implementation of the TAMS API.
Step 1
Amazon Cognito provides user and system-to-system authentication.
Step 2
The API is presented through Amazon API Gateway, which validates requests against the OpenAPI specification.
Step 3
AWS Lambda functions process the API requests. Separate functions exist for the services, sources, flows, segments, and delete request endpoints.
Step 4
Source and flow metadata is stored in an Amazon Neptune graph database.
Step 5
Segment metadata is stored in Amazon DynamoDB for speed of retrieval.
Step 6
Delete requests are forwarded to Amazon Simple Queue Service (Amazon SQS) for asynchronous deletion.
Step 7
A Lambda function processes delete operations by forwarding the requests to a secondary Amazon SQS queue, which then handles the deletion of the corresponding Amazon Simple Storage Service (Amazon S3) objects.
Step 8
After the required wait period, a Lambda function evaluates delete requests and removes only unused objects from Amazon S3 (a sketch of this check follows Step 10 below).
Step 9
Events from core API functions are sent to Amazon EventBridge for subsequent reuse by other systems.
Step 10
An optional Lambda function can deliver webhook notifications to external systems, as defined in the specification.
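As an illustration of Steps 6 through 8, the sketch below shows the general shape of a Lambda handler that consumes delete requests from an SQS queue and removes an S3 object only when no flow still references it. The message fields, index name, and table name are assumptions; the open source implementation defines the real queue contract and reference checks.

```python
"""Illustrative sketch of the asynchronous delete flow (Steps 6-8 above).

Assumptions: the SQS message body fields ("bucket", "object_id"), the
DynamoDB table and index names, and the way segment references are counted.
"""
import json

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.client("dynamodb")

SEGMENTS_TABLE = "tams-segments"  # hypothetical table name


def object_is_referenced(object_id: str) -> bool:
    # Hypothetical check: does any flow still reference this stored object?
    # A real implementation would query an index keyed on the object id.
    resp = dynamodb.query(
        TableName=SEGMENTS_TABLE,
        IndexName="object-id-index",  # hypothetical GSI
        KeyConditionExpression="object_id = :o",
        ExpressionAttributeValues={":o": {"S": object_id}},
        Limit=1,
    )
    return resp["Count"] > 0


def handler(event, context):
    # Triggered by the secondary SQS queue; each record is one delete request.
    for record in event["Records"]:
        body = json.loads(record["body"])
        if object_is_referenced(body["object_id"]):
            continue  # still in use by another flow; keep the object
        s3.delete_object(Bucket=body["bucket"], Key=body["object_id"])
```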
AWS open source TAMS tools
This architecture diagram demonstrates how the multiple components of the AWS TAMS Tools repository can be used alongside the core AWS open source TAMS implementation.
Step 1
The TAMS Store capability is provided by the AWS open source implementation.
Step 2
A React-based user interface application, deployed through AWS Amplify, enables users to navigate the store, view video content through the HLS endpoint, and control the live and file-based ingestion processes.
Step 3
Live video ingestion is facilitated using AWS Elemental MediaLive, which creates segments on Amazon S3 that are subsequently uploaded to the TAMS.
Step 4
File-based import is enabled using AWS Elemental MediaConvert, orchestrated by AWS Step Functions, to chunk the media and upload it into the TAMS.
Step 5
An HLS endpoint is provided to convert the TAMS native API calls into a set of HLS manifests to allow content to be played back within a web-based HLS player.
Step 6
The media processing workflow uses event notifications from the TAMS to trigger additional post-processing of the ingested content. This includes the extraction of images and the creation of proxy versions using Lambda, as well as the export of concatenated files for integration with other systems.
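The sketch below illustrates the event-driven pattern in Step 6: a Lambda handler invoked by an EventBridge rule that matches TAMS segment events and triggers downstream processing such as proxy creation. The detail field names and the processing hook are assumptions; the actual event schema comes from the TAMS implementation and the tools repository.

```python
"""Illustrative sketch of event-driven post-processing (Step 6 above).

Assumptions: the fields inside event["detail"]; the real schema comes from
the TAMS implementation and the EventBridge rule that routes its events.
"""


def create_proxy(bucket: str, key: str) -> None:
    # Placeholder for the real work, e.g. submitting an AWS Elemental
    # MediaConvert job or invoking another Lambda to build a proxy rendition
    # or extract thumbnail images.
    print(f"would create proxy for s3://{bucket}/{key}")


def handler(event, context):
    # Invoked by an EventBridge rule that matches TAMS segment events.
    detail = event.get("detail", {})
    if detail.get("media_type") != "video":  # assumed field name
        return
    create_proxy(detail["bucket"], detail["object_id"])  # assumed field names
```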
Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
The AWS open source implementation of the TAMS API uses AWS X-Ray to trace requests through the serverless infrastructure, including API Gateway, Lambda, and DynamoDB. X-Ray helps developers and support teams track and analyze requests as they flow through the various components of the implementation.
In addition, all logs and metrics are collected within Amazon CloudWatch to facilitate monitoring and analysis. The metrics collected within CloudWatch support the creation of dashboards and the configuration of alarms.
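For context, the sketch below shows one common way to emit X-Ray traces from a Python Lambda function using the AWS X-Ray SDK. It is a generic example rather than the exact instrumentation used by the TAMS implementation, which may rely on built-in Lambda tracing or other tooling; the table name is a placeholder.

```python
"""One common way to emit X-Ray traces from a Python Lambda handler.

Generic sketch only: requires the aws-xray-sdk package and active tracing
enabled on the function.
"""
import boto3
from aws_xray_sdk.core import patch_all, xray_recorder

patch_all()  # record subsegments for boto3/requests calls automatically

dynamodb = boto3.client("dynamodb")


def handler(event, context):
    # Custom subsegment around application logic; AWS SDK calls made inside it
    # appear as child subsegments in the X-Ray service map.
    with xray_recorder.in_subsegment("lookup-segment-metadata"):
        return dynamodb.describe_table(TableName="tams-segments")  # hypothetical table
```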
Security
The TAMS API uses Amazon S3 pre-signed URLs to provide consumers with time-limited access to only the required segments, helping ensure that access control is managed centrally by the API, regardless of the consumer's location, whether within AWS or on-premises.
The AWS open-source implementation of the TAMS specification uses Amazon Cognito by default for authentication, providing OAuth2-based access control on the API, in addition to the ability to federate with other authentication providers. The current API implementation supports coarse-grained, role-based permissions across the various CRUD operations, with the team actively working on extending this to incorporate attribute-based access control (ABAC) in the near future.
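The pre-signed URL mechanism described above can be illustrated in a few lines of boto3; generate_presigned_url is the standard API, while the bucket name, object key, and expiry below are placeholder values for illustration.

```python
"""Minimal sketch of issuing a time-limited, read-only link to one segment.

The bucket name, object key, and expiry are placeholders.
"""
import boto3

s3 = boto3.client("s3")


def presign_segment(bucket: str, object_key: str, expires_in: int = 900) -> str:
    # The TAMS API returns URLs of this kind in segment listings so that
    # consumers can fetch media directly from Amazon S3 for a limited time.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": object_key},
        ExpiresIn=expires_in,
    )


url = presign_segment("example-tams-media-bucket", "segments/flow-hd/0001.ts")
```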
Reliability
The AWS open-source implementation of the TAMS API exclusively uses AWS Regional-level services, including Amazon S3, API Gateway, Lambda, and DynamoDB. This design approach eliminates the need for AWS customers to manage Availability Zone-level resilience. Additionally, all the services employed will automatically scale and recover from any underlying issues.
Performance Efficiency
In the AWS open-source implementation of the TAMS, the database technologies have been carefully selected to provide optimal performance for the diverse access patterns. The sources and flows require complex linking and filtering capabilities, for which Neptune, a graph database, has been chosen as the appropriate solution. For the segments, the access patterns are more straightforward, but speed and performance are critical to handle the ingestion of new segments as they arrive. As a result, DynamoDB has been utilized to deliver the required performance characteristics.
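The segment access pattern described above maps naturally onto a simple DynamoDB key schema. The sketch below assumes a hypothetical table keyed on flow ID with a numeric start-time sort key; the open source implementation defines the actual table design, and the graph-style source and flow relationships are handled separately in Neptune.

```python
"""Illustrative sketch of the segment access pattern described above.

Assumptions: the table name and key schema (flow_id partition key, numeric
start-time sort key).
"""
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("tams-segments")  # hypothetical name


def put_segment(flow_id: str, start_secs: int, object_id: str, timerange: str) -> None:
    # Simple single-item write: new segments arrive continuously during ingest.
    table.put_item(Item={
        "flow_id": flow_id,
        "start_secs": start_secs,
        "object_id": object_id,
        "timerange": timerange,
    })


def list_segments(flow_id: str, from_secs: int, to_secs: int) -> list[dict]:
    # Range query over one flow's timeline: the kind of simple, fast access
    # pattern that a key-value store like DynamoDB handles well.
    resp = table.query(
        KeyConditionExpression=Key("flow_id").eq(flow_id)
        & Key("start_secs").between(from_secs, to_secs)
    )
    return resp["Items"]
```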
Cost Optimization
The TAMS-based approach eliminates the need for high-performance file storage alongside the object storage, as it maintains a single copy of the media on lower-cost object storage. The API facilitates the reuse of media segments across different content, thereby deduplicating the media at the storage level and resulting in savings in both storage space and cost.
The AWS open-source implementation of the TAMS is built around serverless components that scale and incur costs based on usage. Given that most media workloads exhibit peaky demand patterns, this design approach reduces costs to just the persistence layer (Amazon S3, DynamoDB, Neptune) when the system is not actively in use.
Sustainability
The TAMS approach to live media workflows is inherently more optimized and, consequently, more sustainable than traditional methods. At the storage level, there is no longer a requirement for high-performance file systems alongside Amazon S3 object storage, and the storage can be deduplicated, resulting in reduced space requirements.
The use of serverless technologies helps ensure that during periods of low usage, the resources are automatically scaled back, thereby reducing the environmental impact. In contrast, traditional on-premises broadcast solutions would typically remain operational 24/7, regardless of usage patterns.
The edit-by-reference model employed in the TAMS has the potential to reduce the need for rendering on edit workstations, thereby saving compute time and potentially allowing the use of smaller compute instances.
Related Content

Time addressable media store
This sample code deploys the AWS infrastructure required to create a sample implementation of the BBC TAMS API.
Time addressable media store tools
This sample code contains a set of tools to help customers and partners get started with the TAMS API.
Cloud Native Agile Production (CNAP) project
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.