Agentic AI on the actor model.

Build intelligent AI agents with message-passing actors. Async, resilient, and composable.

acton-ai.toml
default_provider = "anthropic"

[providers.anthropic]
type = "anthropic"
model = "claude-sonnet-4-20250514"
api_key_env = "ANTHROPIC_API_KEY"

Introduction

An agentic AI framework where each agent is an actor, built in Rust.

Acton AI combines the actor model with large language models to give you concurrent, fault-isolated AI agents with a simple, ergonomic API. Connect to Anthropic, OpenAI, or local models through Ollama -- all with streaming responses, built-in tool use, multi-turn conversations, and process-sandboxed tool execution.

Installation

Add acton-ai to your project and configure your first LLM provider.

Core Concepts

Understand the actor model, kernels, agents, and how they fit together.

API Reference

Explore ActonAI, PromptBuilder, Conversation, and built-in tools.

Guides

Learn about tool use, streaming, conversations, and multi-agent collaboration.


Quick start

Here is the fastest way to go from zero to a working AI prompt. This example uses Ollama running locally, so no API key is needed.

1. Add the dependency

cargo add acton-ai

2. Create a config file

Save this as acton-ai.toml in your project root:

default_provider = "ollama"

[providers.ollama]
type = "ollama"
model = "qwen2.5:7b"
base_url = "http://localhost:11434/v1"
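
Make sure Ollama is running locally and the model has been pulled first:

ollama pull qwen2.5:7b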

3. Write your first prompt

use acton_ai::prelude::*;
use std::io::Write;

#[tokio::main]
async fn main() -> Result<(), ActonAIError> {
    // Load provider settings from acton-ai.toml
    let runtime = ActonAI::builder()
        .app_name("hello-acton")
        .from_config()?
        .launch()
        .await?;

    // Send a prompt and stream tokens as they arrive
    let response = runtime
        .prompt("What is the capital of France? Answer in one sentence.")
        .system("You are a helpful assistant. Be concise.")
        .on_token(|token| {
            print!("{token}");
            std::io::stdout().flush().ok();
        })
        .collect()
        .await?;

    println!();
    println!("[{} tokens, {:?}]", response.token_count, response.stop_reason);

    runtime.shutdown().await?;
    Ok(())
}

Run it:

cargo run

No config file? No problem.

You can skip the config file entirely and configure the provider in code:

let runtime = ActonAI::builder()
    .app_name("hello-acton")
    .ollama("qwen2.5:7b")
    .launch()
    .await?;

See Installation for all the ways to configure providers.


Key features

Multi-provider LLM support

Connect to Anthropic Claude, OpenAI GPT, Ollama, or any OpenAI-compatible endpoint. Register multiple providers and switch between them per-prompt:

let runtime = ActonAI::builder()
    .app_name("multi-provider")
    .provider_named("claude", ProviderConfig::anthropic("sk-ant-..."))
    .provider_named("local", ProviderConfig::ollama("qwen2.5:7b"))
    .default_provider("local")
    .launch()
    .await?;

// Uses the default provider (local Ollama)
runtime.prompt("Quick question").collect().await?;

// Uses Claude for this specific prompt
runtime.prompt("Complex reasoning task").provider("claude").collect().await?;

Streaming responses

Every prompt streams tokens by default. Attach a callback to process them as they arrive:

runtime
    .prompt("Explain the actor model")
    .on_token(|token| print!("{token}"))
    .collect()
    .await?;
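
Because on_token fires as tokens arrive and collect() still returns the assembled response at the end, you can show live progress and keep the full text. A small sketch -- it assumes the collected response exposes the final text as a text field, mirroring the conversation responses shown below:

// Live progress goes to stderr; the final answer stays on stdout.
// `response.text` is assumed here, by analogy with Conversation responses.
let response = runtime
    .prompt("Explain the actor model in two sentences")
    .on_token(|token| eprint!("{token}"))
    .collect()
    .await?;

eprintln!();
println!("{}", response.text);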

Built-in tools

Give your agents the ability to read files, run shell commands, search with glob and grep, fetch URLs, and more -- all with a single method call:

let runtime = ActonAI::builder()
    .app_name("tool-user")
    .from_config()?
    .with_builtins()   // enables read_file, bash, glob, grep, etc.
    .launch()
    .await?;

runtime
    .prompt("List all Rust source files in this project and count the lines")
    .collect()
    .await?;
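
Tool use is model-driven: when the model decides a tool is needed, the framework executes the call and feeds the result back so the model can finish its answer.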

Multi-turn conversations

The Conversation API handles history management automatically. Each call to send() appends both the user message and the assistant response to the conversation history:

let conv = runtime.conversation()
    .system("You are a helpful Rust tutor.")
    .build()
    .await;

let r1 = conv.send("What is ownership in Rust?").await?;
println!("{}", r1.text);

// The conversation remembers the previous exchange
let r2 = conv.send("How does borrowing relate to that?").await?;
println!("{}", r2.text);

Or launch an interactive terminal chat with a single builder expression:

ActonAI::builder()
    .app_name("chat")
    .from_config()?
    .with_builtins()
    .launch()
    .await?
    .conversation()
    .run_chat()
    .await
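
run_chat() covers the common case; if you need custom input handling, the same loop is straightforward to write by hand with send(). A minimal sketch (the > prompt marker and /quit command are illustrative, not part of the API):

use std::io::{self, BufRead, Write};

// A hand-rolled loop over send() -- roughly what run_chat() automates.
let conv = runtime.conversation()
    .system("You are a helpful assistant.")
    .build()
    .await;

let stdin = io::stdin();
loop {
    print!("> ");
    io::stdout().flush().ok();
    let Some(Ok(line)) = stdin.lock().lines().next() else { break };
    if line.trim().is_empty() { continue; }
    if line.trim() == "/quit" { break; }
    let reply = conv.send(line.trim()).await?;
    println!("{}", reply.text);
}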

Sandboxed tool execution

Sandboxed tool calls run in a subprocess with resource limits (rlimits) and a wall-clock timeout. On Linux (kernel 5.13+), the child process additionally installs a best-effort Landlock + seccomp filter before running the tool. No hypervisor is required, and the sandbox works on Linux, macOS, and Windows.

let runtime = ActonAI::builder()
    .app_name("sandboxed")
    .from_config()?
    .with_builtins()
    .with_process_sandbox()   // Isolate sandboxed tools in a subprocess
    .launch()
    .await?;
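
With the sandbox enabled, prompts that use tools work exactly as before -- the isolation is transparent to the caller. An illustrative follow-up (the prompt text is just an example):

// Any tool calls the model makes for this prompt (bash, read_file, ...)
// now execute in the sandboxed subprocess, not the main process.
runtime
    .prompt("Run `uname -a` and summarize the result")
    .collect()
    .await?;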

Next steps