# Development Setup

Everything you need to clone, build, test, and contribute to acton-ai.
## Prerequisites

### Rust toolchain

Install Rust via rustup:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

Acton-ai targets Rust edition 2021. Any stable toolchain that supports edition 2021 will work. Verify your installation:

```bash
rustc --version
cargo --version
```
### System dependencies

**Linux (Ubuntu / Debian):**

```bash
sudo apt-get update
sudo apt-get install -y build-essential pkg-config libssl-dev
```

**macOS:** Xcode Command Line Tools are sufficient:

```bash
xcode-select --install
```
### Sandbox testing

The tool sandbox is a cross-platform process sandbox; no hypervisor is required. Sandboxed tool calls re-exec the current binary as a child process, apply rlimits and a wall-clock timeout, and on Linux additionally install a best-effort Landlock + seccomp filter before running the tool.

The hardening layer is gated by the `sandbox-hardening` Cargo feature, which is enabled by default on Linux and compiled out on other platforms. Sandbox tests build and run on Linux (x86_64 and aarch64), macOS (Intel and Apple Silicon), and Windows x86_64 without any extra system dependencies.
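To exercise the sandbox end to end, run the `process_sandbox` example (listed under Running examples below):

```bash
cargo run --example process_sandbox
```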
#### Running sandbox tests without hardening

To test the rlimits-only path (useful when reproducing behavior on older Linux kernels), build with `--no-default-features`. The sandbox still enforces resource limits and timeouts; only the Landlock/seccomp layer is skipped.
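For example, to run the test suite against the rlimits-only sandbox:

```bash
cargo test --no-default-features
```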
### Optional: Ollama for integration testing

Many examples and integration tests use Ollama for local LLM access:

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a small model for testing
ollama pull qwen2.5:7b
```
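To confirm the install worked and the model is available, list the locally pulled models:

```bash
ollama list
```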
## Cloning the repository

```bash
git clone https://github.com/Govcraft/acton-ai.git
cd acton-ai
```

The repository is a single-crate Rust project (no workspace). The main source lives under `src/`, with examples in `examples/` and documentation in `docs/`.
## Building the project

### Standard build

```bash
cargo build
```

### Build with all features

```bash
cargo build --all-features
```

The only optional feature flag is `sandbox-hardening` (on by default), which enables Linux Landlock + seccomp filters for the process sandbox. See the Installation page for details.

### Build the documentation

```bash
cargo doc --open
```

This generates rustdoc output and opens it in your browser.
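When you are working on crate internals, it can also help to include private items in the generated docs (a standard rustdoc flag):

```bash
cargo doc --document-private-items --open
```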
## Running tests

### Run the full test suite

```bash
cargo test
```

This runs all unit tests embedded in source files and any integration tests. The crate has extensive unit tests co-located with each module.

### Run tests for a specific module

```bash
cargo test --lib kernel
cargo test --lib llm
cargo test --lib tools
cargo test --lib memory
cargo test --lib error
```
### Run tests with output visible

```bash
cargo test -- --nocapture
```

### Run a specific test by name

```bash
cargo test builder_ollama_sets_provider
```
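The filter matches substrings, so a partial name runs every test that contains it:

```bash
# Runs every test whose name contains "sandbox"
cargo test sandbox
```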
## Running examples

The `examples/` directory contains runnable examples demonstrating various features:

| Example | Description |
|---|---|
| `ollama_chat` | Basic chat with Ollama |
| `ollama_chat_advanced` | Advanced chat with streaming and system prompts |
| `ollama_tools` | Tool usage with Ollama |
| `conversation` | Multi-turn conversation management |
| `multi_provider` | Using multiple LLM providers |
| `multi_agent` | Multi-agent collaboration |
| `per_agent_tools` | Per-agent tool configuration |
| `process_sandbox` | Process-sandboxed bash execution |
| `agent_skills` | Agent skills system: loading and activating `.md` skills |
Run an example:

```bash
cargo run --example ollama_chat
cargo run --example agent_skills
```

> **LLM provider required:** Most examples require a running Ollama instance or an API key for Anthropic/OpenAI. Check the example source for provider configuration details.
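For the hosted providers, API keys are usually supplied via environment variables. The variable names below are the conventional ones and are an assumption here, so verify them against the example source:

```bash
# Assumed conventional variable names -- confirm in the example source
ANTHROPIC_API_KEY=sk-... cargo run --example multi_provider
OPENAI_API_KEY=sk-... cargo run --example multi_provider
```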
## Development workflow tips

### Clippy

Always run Clippy before submitting changes:

```bash
cargo clippy --all-targets --all-features
```

See the Code Standards page for Clippy configuration details and known warnings.
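To mirror the strictness CI setups commonly apply, you can make the run fail on any warning (check the Code Standards page for this project's exact policy):

```bash
cargo clippy --all-targets --all-features -- -D warnings
```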
### Formatting

Format your code with rustfmt:

```bash
cargo fmt
```

Check formatting without modifying files:

```bash
cargo fmt -- --check
```
### Watch mode

For rapid iteration, use cargo-watch to automatically rebuild on file changes:

```bash
cargo install cargo-watch
cargo watch -x check
```

Or run tests on every save:

```bash
cargo watch -x test
```
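cargo-watch accepts multiple `-x` flags and runs them in order, so you can chain a fast check with the tests on every save:

```bash
cargo watch -x check -x test
```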
### Logging during development

Acton-ai uses the `tracing` crate for structured logging. Set the `RUST_LOG` environment variable to control log output:

```bash
# See debug output from acton-ai
RUST_LOG=acton_ai=debug cargo run --example ollama_chat

# See trace-level output for a specific module
RUST_LOG=acton_ai::llm=trace cargo run --example ollama_chat

# See all logs at info level
RUST_LOG=info cargo test
```
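Directives can be combined with commas (standard `tracing` EnvFilter syntax), for example info-level output everywhere with trace-level detail for the tools module:

```bash
RUST_LOG=info,acton_ai::tools=trace cargo run --example ollama_tools
```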
## Useful cargo commands

```bash
# Type-check for compilation errors without producing a binary (faster)
cargo check

# Check with all features enabled
cargo check --all-features

# Build in release mode (for benchmarking)
cargo build --release

# View the dependency tree
cargo tree

# Check for outdated dependencies
cargo install cargo-outdated
cargo outdated
```
## Project structure

```text
acton-ai/
  src/
    lib.rs           # Crate root, module declarations, prelude
    facade.rs        # High-level ActonAI facade (builder pattern)
    kernel/          # Kernel actor (central supervisor)
    agent/           # Agent actor (individual AI agents)
    llm/             # LLM provider actor and API clients
    tools/           # Tool registry, executors, builtins, sandbox
    memory/          # Persistence and context window management
    conversation.rs  # Actor-backed Conversation handle
    prompt.rs        # Fluent PromptBuilder API
    stream.rs        # Stream handling traits
    messages.rs      # Actor message definitions
    types.rs         # Core type aliases (AgentId, CorrelationId, etc.)
    error.rs         # Error type hierarchy
    config.rs        # Configuration file loading
  examples/          # Runnable example programs
  docs/              # Documentation site (Next.js + Markdoc)
  Cargo.toml         # Package manifest
```
For a deeper look at the module responsibilities and how they interact, see the Architecture Overview page.
## Next steps
- Architecture Overview -- understand the actor hierarchy and message flow
- Code Standards -- coding conventions, error handling patterns, and PR process
- Two API Levels -- understand the high-level facade vs. low-level actor API