AWS Partner Network (APN) Blog

Build Production-Ready Generative AI Applications with Orkes and HAQM Bedrock

Orkes
Connect with Orkes

By Viren Baraiya, CTO at Orkes
By Marina Novikova, Senior Partner Solution Architect at AWS
By Shashi Raina, Senior Partner Solution Architect at AWS

In the rapidly evolving landscape of generative artificial intelligence (GenAI), building powerful and scalable applications requires more than just foundation models. The key lies in orchestrating complex workflows that manage the intricate interplay of data, models, and services. As organizations push the boundaries of GenAI, they’ve recognized the critical importance of robust orchestration tools. These applications are no longer simple, single-step processes. Instead, they involve complex chains of operations, from data preprocessing to output generation. Each operation requires careful coordination to ensure optimal performance, reliability, and scalability.

GenAI applications present unique challenges because of their non-deterministic nature. While existing frameworks make building these applications easier, their inherent stochasticity adds complexity when they're deployed in production environments. This forces a re-evaluation of infrastructure needs for applications incorporating GenAI models. These applications often need to integrate with existing systems, accommodate human oversight, and implement guardrails that weren't traditionally part of application architectures. As a result, they create friction when scaling in production environments.

The orchestration layer emerges as a critical component in GenAI system architecture. Platforms like Orkes Conductor play a vital role in managing these complex workflows, working seamlessly with cloud services like AWS to provide the necessary infrastructure and safeguards. This orchestration layer forms the foundation of modern generative AI applications. It enables organizations to manage unique runtime requirements, integrate smoothly with existing components, and implement essential checks and balances. Robust orchestration is increasingly crucial for successful GenAI application development and deployment.

The Benefits of Orchestration in GenAI

Orchestration simplifies complexity and unlocks speed. Here’s how it supports scalable GenAI development:

  • Workflow Management: GenAI applications often involve complex workflows with multiple steps and dependencies. Orchestration tools allow developers to define, manage, and automate these workflows efficiently. This helps ensure that each component of the application works in harmony, from data preprocessing to model training and inference.
  • Resource Optimization: AI workloads can be resource intensive. Orchestration tools help in efficiently allocating and managing computational resources, optimizing hardware utilization. While HAQM Bedrock manages infrastructure and provides prompt optimization capabilities, orchestration tools are responsible for input data preparation, batching, caching, and output processing, which can be important when dealing with large-scale GenAI models.
  • Scalability: As GenAI applications grow in complexity and usage, scalability becomes a critical factor. Orchestration tools provide the framework for scaling applications horizontally and vertically, allowing them to handle increased loads and more complex tasks without a complete redesign.
  • Version Control and Reproducibility: In the growing field of GenAI, keeping track of different versions of models, datasets, and code is crucial. Orchestration tools often include features for version control and reproducibility, ensuring that experiments and deployments can be replicated and rolled back if necessary.
  • Integration and Interoperability: GenAI applications typically require integration with various data sources, APIs, and services. Orchestration tools facilitate seamless integration between different components and external services, allowing for more powerful and versatile applications.
  • Monitoring and Logging: Effective monitoring is essential for maintaining the performance and reliability of GenAI applications. Orchestration tools often come with built-in monitoring and logging capabilities, providing insights into system performance, model behavior, and potential issues.
  • Automated Testing and Deployment: Orchestration tools can automate the testing and deployment processes, ensuring that new versions of GenAI models or application components are thoroughly tested before being pushed to production. This reduces the risk of errors and improves overall system reliability.
  • Compliance and Governance: As GenAI applications often deal with sensitive data, ensuring compliance with regulations and internal governance policies is crucial. Orchestration tools can help implement and enforce these policies consistently across the application lifecycle.
  • Collaboration and Team Productivity: In large-scale GenAI projects, multiple teams often work together. Shared orchestration frameworks give development teams a clear system to build, test, and iterate faster and with fewer handoffs.
  • Cost Management: By optimizing resource usage and providing insights into system performance, orchestration tools can help manage and reduce the costs associated with running GenAI applications, especially in cloud environments.
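The workflow-management and error-handling ideas above can be sketched in a few lines of plain Python. This is an illustrative toy, not the Orkes or Step Functions API: the `Step` class, `run_pipeline` function, and the placeholder lambda steps are all hypothetical names standing in for real preprocessing and model-inference tasks.

```python
import time

class Step:
    """One unit of work in a GenAI pipeline, with bounded retries."""
    def __init__(self, name, fn, max_retries=2, backoff_s=0.0):
        self.name, self.fn = name, fn
        self.max_retries, self.backoff_s = max_retries, backoff_s

    def run(self, payload):
        for attempt in range(self.max_retries + 1):
            try:
                return self.fn(payload)
            except Exception:
                if attempt == self.max_retries:
                    raise  # exhausted retries; surface the failure
                time.sleep(self.backoff_s)

def run_pipeline(steps, payload):
    """Run steps sequentially, feeding each step's output to the next."""
    for step in steps:
        payload = step.run(payload)
    return payload

# Placeholder steps standing in for real preprocessing and LLM calls.
pipeline = [
    Step("preprocess", lambda text: text.strip().lower()),
    Step("summarize", lambda text: text[:20]),  # stand-in for an LLM call
]
print(run_pipeline(pipeline, "  Hello GenAI Orchestration  "))
```

Production orchestrators add much more on top of this skeleton (parallel branches, persistence, monitoring), but the core contract is the same: named steps, defined data flow, and retry semantics.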

AWS Step Functions: Orchestrating GenAI Workflows

At AWS, we’ve seen firsthand the power of orchestration through our AWS Step Functions service. When combined with HAQM Bedrock, Step Functions provides a robust framework for orchestrating GenAI workflows. This combination allows developers to create sophisticated, multi-step processes that can leverage multiple AI models, handle errors gracefully, and scale effortlessly.

An AWS blog post recently showcased a workflow that combines text summarization and translation using various AI models and services. Figure 1 shows an example of the workflow, which includes HAQM Bedrock integration. This kind of complex, yet seamless integration is only possible with proper orchestration.

Step Functions workflow

Figure 1: Step Functions workflow example
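A state in such a workflow often resolves to a short task handler that calls HAQM Bedrock. The sketch below shows what that handler might look like with the AWS SDK for Python (boto3), assuming an Anthropic model on Bedrock; the `handler` function name, the example model ID, and the event shape are illustrative, and running it requires AWS credentials and model access.

```python
import json

# Example model ID; check your HAQM Bedrock console for available models.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_request(prompt, max_tokens=512):
    """Build an InvokeModel request body in the Anthropic messages format."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def handler(event, context=None):
    """Lambda-style task handler a Step Functions state could invoke.
    Requires AWS credentials; boto3 is imported lazily so the module
    loads (and the pure helpers are testable) without the AWS SDK."""
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=build_request(event["prompt"]),
    )
    payload = json.loads(response["body"].read())
    return {"completion": payload["content"][0]["text"]}
```

Keeping the request-building logic in a pure function like `build_request` makes the handler easy to unit test, while the orchestrator owns retries and error routing around it.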

Orkes Conductor: A Powerful Ally in GenAI Orchestration

While AWS offers workflow capabilities with Step Functions to execute custom tasks, we also recognize the value that partner solutions bring to the table. One such solution helping in the GenAI orchestration space is Orkes Agentic Workflows. It gives teams a visual, developer-friendly interface to build and run AI workflows, plus a growing library of out-of-the-box large language model (LLM) tasks to speed up development. For complex AI tasks, developers can create their own task workers in supported languages and share them within the organization.

Orkes Conductor excels at creating complex workflows that integrate various AI models via HAQM Bedrock and other services, such as HAQM RDS for PostgreSQL with its pgvector extension for Retrieval Augmented Generation (RAG) workflows alongside HAQM Bedrock, and HAQM API Gateway for additional business logic and processing. It also offers robust error handling, retries, and monitoring of AI workflows, all critical components in building the reliable GenAI applications described above.
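The retrieval half of a RAG workflow reduces to nearest-neighbor search over embeddings. The sketch below uses a tiny in-memory store with hand-made two-dimensional vectors as a stand-in for embeddings from a Bedrock embedding model stored in pgvector; in production the `top_k` lookup would be a single SQL query against HAQM RDS for PostgreSQL using pgvector's distance operator.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=2):
    """Return the k documents whose embeddings are closest to the query.
    With pgvector, this becomes ORDER BY embedding <=> :query LIMIT k."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, d["embedding"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy 2-D embeddings standing in for real embedding-model output.
store = [
    {"text": "returns policy", "embedding": [1.0, 0.0]},
    {"text": "shipping times", "embedding": [0.0, 1.0]},
    {"text": "refund steps", "embedding": [0.9, 0.1]},
]
print(top_k([1.0, 0.0], store, k=2))
```

The retrieved passages are then injected into the prompt of a downstream LLM task, which is exactly the kind of multi-step data flow an orchestrator manages.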

With Orkes, you can plug in your preferred models, LLMs, vector databases, and APIs as native building blocks of your workflow. It covers use cases as simple as an LLM Text Complete task responding to a prompt or LLM-generated embeddings for documents stored in a vector database, as well as complex parallel task execution, including human-in-the-loop (HITL) steps and communication with other organizational services for additional data and context.

Figure 2 illustrates an Orkes workflow for article summarization that combines two key tasks. The workflow begins by retrieving article content from a provided URL using a pre-built LLM Get Document task, which can access text from various sources. Once the content is retrieved, the workflow uses HAQM Bedrock through an LLM Text Complete task with a pre-defined prompt to generate the article summary.

Application flow using Orkes Conductor

Figure 2: Application flow using Orkes Conductor

Pre-defined Orkes tasks allow users to create workflows quickly and without writing custom code.
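The two-task flow in Figure 2 can be approximated in plain Python to show the data handoff. This is an illustrative sketch, not the Orkes SDK: `get_document`, `build_prompt`, and `summarize` are hypothetical names mirroring the pre-built LLM Get Document and LLM Text Complete tasks, and the model call is injected as a function so the chaining logic stays testable without network access.

```python
from urllib.request import urlopen

SUMMARY_PROMPT = "Summarize the following article in three sentences:\n\n{article}"

def get_document(url):
    """Stand-in for the LLM Get Document task: fetch text from a URL.
    (The real task can pull text from a variety of sources.)"""
    with urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def build_prompt(article, template=SUMMARY_PROMPT):
    """Wire the retrieved text into the pre-defined summarization prompt."""
    return template.format(article=article)

def summarize(url, llm_complete):
    """Chain the two tasks: retrieve, then complete. `llm_complete` is the
    model call (e.g. HAQM Bedrock via the Orkes integration), injected here."""
    return llm_complete(build_prompt(get_document(url)))
```

In Orkes Conductor the same chaining is expressed declaratively in the workflow definition, with the platform handling retries, state, and monitoring between the two tasks.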

Conclusion

By utilizing the Orkes solution, organizations gain faster time-to-market and accelerate their GenAI journey with the list of pre-defined LLM tasks. Teams can start quickly, even without deep GenAI expertise, then customize and scale as needs evolve, choosing from a variety of foundation models and AI-related integrations and experimenting rapidly. As the number of GenAI applications grows, Orkes reduces operational overhead, letting teams refine, test, and govern prompt templates for AI models and build a library of reusable, versioned LLM tasks the entire organization can use and share. On the governance and compliance side, Orkes provides enterprise features like role-based access control (RBAC), audit logs, and versioning for supported application types, including GenAI workflows.

As we continue to explore the vast potential of GenAI, it’s clear that orchestration tools will play an increasingly vital role. With tools like Orkes Conductor, the ability to effectively manage complex AI workflows will be a key differentiator in the success of GenAI applications.

GenAI success doesn’t stop at smarter models; it takes smarter workflows. With Orkes Conductor and HAQM Bedrock, you can move quickly, build reliably, and scale with confidence.



Orkes – AWS Partner Spotlight

Orkes is a modern workflow orchestration platform that allows developers to seamlessly build and run complex workflows, applications, and integrations without the hassle of building infrastructure. Built on the battle-tested Conductor open-source software (OSS), Orkes enables users to build durable, long-lasting workflows, quickly modernize applications, and speed up development of mission-critical applications.

Contact Orkes | Partner Overview | AWS Marketplace