Orchestrate Your AI

Bring engineering, ops and data science teams together to create AI products efficiently with lower cost and faster time to market.

Flyte, supercharged

All the ML workflow features you’ve come to love in Flyte™, from the team that built Flyte, optimized for better execution, governance, and productivity so your teams spend less time debugging and more time innovating.

Accelerate Experimentation

Unify work streams

Integrated compute and orchestration delivers an end-to-end platform that accelerates AI product development with managed ops and flexible deployment.

Maximize AI ROI

Ship faster and more efficiently, reducing costs without sacrificing innovation. Execute on a trusted foundation that includes role-based accounts, SOC 2 compliance, and enterprise-grade observability and tracking.

Powerful DAGs, observability & cost-efficient engineering

Modern orchestration built for AI scale

Python-driven experience

We take care of running your compute efficiently, so developers and data scientists can define, manage, and execute complex workflows using familiar Python constructs and libraries.
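As a rough illustration of the decorator-driven style this kind of Python-native orchestration uses (a simplified stand-in, not Union's actual API; the `task` and `workflow` helpers below are hypothetical):

```python
from functools import wraps

def task(fn):
    """Mark a plain Python function as a unit of work the engine can schedule."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        # A real orchestrator would record inputs/outputs here for lineage;
        # this sketch simply runs the function.
        return fn(*args, **kwargs)
    wrapper.is_task = True
    return wrapper

def workflow(fn):
    """Mark a function that wires tasks together into a DAG."""
    fn.is_workflow = True
    return fn

@task
def clean(raw: list) -> list:
    """Drop negative readings from the raw data."""
    return [x for x in raw if x >= 0]

@task
def mean(values: list) -> float:
    """Average the cleaned values."""
    return sum(values) / len(values)

@workflow
def pipeline(raw: list) -> float:
    """Compose tasks with ordinary Python calls."""
    return mean(clean(raw))

print(pipeline([1.0, -2.0, 3.0]))  # 2.0
```

The point of the pattern is that the workflow body is ordinary Python: composition, typing, and testing all work with the tools developers already use.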

Local development, rapid scale

Build locally with a modern, modular architecture that reinforces software development best practices, then seamlessly transition to remote execution.

Declarative infrastructure

Express your requirements and leave the infrastructure provisioning, configuration, and scaling to us. Leverage Ray, Spark, Dask, and distributed training, all through a single platform.
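Declarative infrastructure usually means attaching a resource request to a task and letting the platform decide where and how to provision it. A minimal sketch of that idea, assuming a hypothetical `Resources` type and `task` decorator (not Union's actual API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resources:
    """What a task needs, stated declaratively; provisioning is the platform's job."""
    cpu: str = "1"
    mem: str = "1Gi"
    gpu: int = 0

def task(requests: Resources = Resources()):
    """Attach a resource request to a function without changing how it runs."""
    def decorator(fn):
        fn.requests = requests
        return fn
    return decorator

@task(requests=Resources(cpu="4", mem="16Gi", gpu=1))
def train(epochs: int) -> str:
    return f"trained for {epochs} epochs"

print(train.requests)  # Resources(cpu='4', mem='16Gi', gpu=1)
print(train(3))        # trained for 3 epochs
```

The code declares *what* it needs; a scheduler reading `train.requests` decides *where* to place it, which is what keeps provisioning out of application code.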

Enhanced performance

Boost performance with faster file reads, full workflow caching, and an execution engine fine-tuned for faster runs.

Work with any GPU type

Adapt to diverse computing needs with a range of accelerators (NVIDIA GPUs, TPUs, and others) matched to your workloads to maximize performance and minimize cost.

Secure multi-cloud

Confidently run your AI and data workflows across cloud providers while maintaining high data protection standards and compliance, with full data lineage, versioning, caching, observability, and reproducibility.

The Union orchestration partner network

Available now on the AWS Marketplace

Available soon on the GCP Marketplace

Member of the Nvidia Inception Program

Union powers the most advanced AI Apps

“We want to simplify and not have to think about and manage different technology stacks. We want to write everything in a Union workflow and have one platform for orchestrating these jobs; that’s awesome and less stuff for us to worry about.”

Thomas Busath
ML Engineer at Porch

“Our products are not powered by a single model but instead are a composite of many models in a larger inference pipeline. What we serve are AI pipelines, which are made of functions, some of which are AI models. Union is ideal for such inference pipelines.”

Reda Oulbacha
ML Developer at Artera AI

“Before leveraging the accelerated datasets solution, we were opting to build the index on the fly (downloading the source data and building the minhash table from it) to minimize the data transfer at the expense of needing tons of RAM for each pod. With the persistent storage option, we are able to store the pre-built indices and also reduce the RAM requirement for each worker. The gains reduced the task execution time by roughly 50% and the RAM requirements by 75%, effectively quadrupling our throughput on the same node pool.”

Brian O’Donovan
Sr Director of Bioinformatics & Computational Biology at Delve Bio