Sage Elliott

Actors: Faster, Cheaper AI Workflows with Stateful Containers

<div style="text-align: center; margin-top: -1.5rem;">Actors allow relaunching tasks almost instantly</div>

AI workflows often use containers as isolated environments per task, but each spin-up requires initializing dependencies, configuring models, and loading data. This overhead adds time and resource costs, especially in workflows that need rapid, repeated executions.

Actors solve this by maintaining long-running stateful containers that stay ready to handle requests without repeated initialization. These persistent containers enable developers to execute tasks or serve models much more efficiently, reusing the same environment until a defined time-to-live (TTL). This can reduce cold-start time by up to 99% in complex AI workflows.

Actors for Stateful, Reusable AI Workflow Execution

Actors dramatically reduce the cost of cold starts by maintaining long-running stateful environments that stay ready for use until a defined time-to-live (TTL). This persistent setup eliminates redundant initialization and unlocks several key benefits:

1. Cold-Start Time Reduction in AI Workflows

Actors allow tasks to share the same pre-initialized environment, eliminating redundant setup between tasks. A container running one task can be reused for another if their compute and environment requirements align.

2. Near Real-Time Inference

Actors eliminate repeated initialization, unlocking near-instant execution for ML model inference. For example, a model container can stay live to handle ongoing requests, removing the need to reload the model for each prediction.

3. Simplified Stateful Task Execution

Actors retain context while active, streamlining workflows that depend on a persistent state. This is especially useful for:

  • Incremental Data Enrichment: Gradually augmenting datasets with new information.
  • Streaming Data Processing: Continuously processing data in real time.
  • Dynamic Machine Learning Models: Updating models or their parameters without disrupting operations.

Actors simplify these state-dependent tasks, enabling more efficient and responsive workflows.
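As a plain-Python sketch of the first pattern (the names are illustrative, not the Union API), a long-lived process can keep an enrichment cache in memory and grow it incrementally across invocations:

```python
# Illustrative sketch of incremental data enrichment inside a long-lived
# process: the cache survives between calls, so each batch only does
# work for records it has not seen. Names are hypothetical.

enriched = {}   # persists for the lifetime of the actor process

def enrich_batch(records):
    """Attach a derived field to each new record, reusing earlier results."""
    for rec in records:
        key = rec["id"]
        if key not in enriched:                  # only unseen ids do work
            enriched[key] = {**rec, "name_upper": rec["name"].upper()}
    return len(enriched)
```

Calling `enrich_batch` twice with overlapping ids enriches only the new records, which is exactly the kind of persistent state a fresh container would lose.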

Enabling Actors for your AI Workflows

To incorporate Actors into your Union AI workflows, start by declaring an Actor environment. The example below demonstrates how to set this up, but be sure to check the documentation for more details and options.

```python
# Assumes the Union SDK is installed; exact import paths may vary by version.
from typing import List

import torch
from union import ActorEnvironment, Resources

# Define an Actor environment
actor = ActorEnvironment(
    name="my-actor",          # Name the environment
    container_image=image,    # Specify the container image to use
    replica_count=1,          # Set the number of container replicas
    ttl_seconds=120,          # Time-to-live for the Actor environment
    requests=Resources(       # Resource allocation for the environment
        cpu="2",              # CPU resources
        mem="500Mi"           # Memory resources
    ),
)


# Create a task within the Actor environment
@actor.task
def actor_knn_predict(
    model: torch.nn.Module,
    pred_data: List[List[float]]
) -> List[int]:
    # Use the model to make predictions (assumes the model exposes `predict`)
    predictions = model.predict(pred_data)
    return predictions.tolist()
```

Get Started with Actors Today

Actors are available in both Union BYOC (Bring Your Own Compute) and Union Serverless environments, making them versatile for a wide range of use cases. Whether you’re serving AI models, processing streaming data, or running repeated tasks, Actors enable you to build faster, more efficient AI workflows with minimal overhead.

Spend less time waiting and more time innovating. Check out the documentation and start building stateful ML workflows today.

Looking to get started with Union? Book a demo or try out Union Serverless!

Unified AI Platform
Reusable Containers
AI Workflows