# Architecture

## Overview
Void-Box is a composable agent runtime where each agent runs in a hardware-isolated micro-VM. On Linux this uses KVM; on macOS (Apple Silicon) it uses Virtualization.framework (VZ). The core equation is:
VoidBox = Agent(Skills) + Isolation
A VoidBox binds declared skills (MCP servers, CLI tools, procedural knowledge files, reasoning engines) to an isolated execution environment. Boxes compose into pipelines where output flows between stages, each in a fresh VM.
## Component Diagram
┌──────────────────────────────────────────────────────────────────┐
│ User / Daemon / CLI │
│ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ VoidBox (agent_box.rs) │ │
│ │ name: "analyst" │ │
│ │ prompt: "Analyze AAPL..." │ │
│ │ skills: [claude-code, financial-data.md, market-mcp] │ │
│ │ config: memory=1024MB, vcpus=1, network=true │ │
│ └─────────────────────┬────────────────────────────────────┘ │
│ │ resolve_guest_image() → .build() → .run()
│ ┌─────────────────────▼───────────────────────────────────┐ │
│ │ OCI Client (voidbox-oci/) │ │
│ │ guest image → kernel + initramfs (auto-pull, cached) │ │
│ │ base image → rootfs (pivot_root) │ │
│ │ OCI skills → read-only mounts (/skills/...) │ │
│ │ cache: ~/.voidbox/oci/{blobs,rootfs,guest}/ │ │
│ └─────────────────────┬───────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────▼───────────────────────────────────┐ │
│ │ Sandbox (sandbox/) │ │
│ │ ┌─────────────┐ ┌──────────────┐ │ │
│ │ │ MockSandbox │ │ LocalSandbox │ │ │
│ │ │ (testing) │ │ (KVM / VZ) │ │ │
│ │ └─────────────┘ └──────┬───────┘ │ │
│ └──────────────────────────┼──────────────────────────────┘ │
│ │ │
│ ┌──────────────────────────▼──────────────────────────────┐ │
│ │ MicroVm (vmm/) │ │
│ │ ┌────────┐ ┌────────┐ ┌─────────────┐ ┌──────────────┐ │ │
│ │ │ KVM VM │ │ vCPU │ │ VsockDevice │ │ Guest Net │ │ │
│ │ │ │ │ thread │ │ (AF_VSOCK) │ │ (SLIRP / VZ) │ │ │
│ │ └────────┘ └────────┘ └───────┬─────┘ └───────┬──────┘ │ │
│ │ Linux/KVM: virtio-blk (OCI rootfs) │ │ │
│ │ Host mounts: 9p on KVM, virtiofs on VZ │ │ │
│ │ Linux/KVM only: Seccomp-BPF on VMM thread │ │ │
│ └────────────────────────────────┼───────────────┼────────┘ │
│ │ │ │
└═══════════════════════════════════╪═══════════════╪══════════════┘
Hardware Isolation │ │
│ vsock:1234 │ guest networking
┌───────────────────────────────────▼───────────────▼───────────────┐
│ Guest VM (Linux kernel) │
│ │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │ guest-agent (PID 1) │ │
│ │ - Authenticates via session secret (kernel cmdline) │ │
│ │ - Reads /etc/voidbox/allowed_commands.json │ │
│ │ - Reads /etc/voidbox/resource_limits.json │ │
│ │ - Applies setrlimit + command allowlist │ │
│ │ - Drops privileges to uid:1000 │ │
│ │ - Listens on vsock port 1234 │ │
│ │ - pivot_root to OCI rootfs (if sandbox.image set) │ │
│ │ - PTY handler: forkpty, up to 4 concurrent sessions │ │
│ └────────────────────────┬─────────────────────────────────────┘ │
│ │ fork+exec (headless) or forkpty (PTY) │
│ ┌────────────────────────▼─────────────────────────────────────┐ │
│ │ runtime CLI (claude-code, codex, or claudio mock) │ │
│ │ Headless: Claude stream-json or Codex exec --json │ │
│ │ Interactive PTY: raw terminal I/O over vsock │ │
│ │ Skills: /workspace/.claude/skills/*.md │ │
│ │ MCP: /workspace/.mcp.json or ~/.codex/config.toml │ │
│ │ OCI skills: /skills/{python,go,...} (read-only mounts) │ │
│ │ LLM: Claude API / local proxies / Codex API │ │
│ └──────────────────────────────────────────────────────────────┘ │
│ │
│ Linux/KVM: 10.0.2.15/24 gw 10.0.2.2 dns 10.0.2.3 │
│ macOS/VZ: DHCP-assigned NAT network │
└───────────────────────────────────────────────────────────────────┘
## Data Flow

### Single VoidBox execution
1. VoidBox::new("name") User declares skills, prompt, config
│
2. resolve_guest_image() Resolve kernel + initramfs (6-step chain)
│ Pulls from GHCR if no local paths found
│
3. .build() Creates Sandbox (mock or local VM backend: KVM/VZ)
│ Mounts OCI rootfs + skill images if configured
│
4. .run(input, telemetry_buffer) Execution begins
│
├─ provision_security() Write resource limits + allowlist to /etc/voidbox/
├─ provision_skills() Write SKILL.md files to /workspace/.claude/skills/
│ Write MCP discovery to /workspace/.mcp.json
├─ write input Write /workspace/input.json (if piped from previous stage)
│
├─ sandbox.exec_agent_streaming()
│ Send ExecRequest over vsock
│ │
│ [vsock port 1234]
│ │
│ guest-agent receives Validates session secret
│ │ Checks command allowlist
│ │ Applies resource limits (setrlimit)
│ │ Drops privileges (uid:1000)
│ │
│ fork+exec runtime CLI Runs provider-specific JSONL mode
│ │
│ runtime executes Reads skills, calls LLM, uses tools
│ │
│ ExecResponse sent stdout/stderr/exit_code over vsock
│ │
├─ parse runtime output Extract AgentExecResult (tokens, cost, tools)
├─ read output file /workspace/output.json
│
5. StageResult box_name, agent_result, file_output
### Pipeline execution
Pipeline::named("analysis", box1)
.pipe(box2) Sequential: box1.output → box2.input
.fan_out(vec![box3, box4]) Parallel: both receive box2.output
.pipe(box5) Sequential: merged [box3, box4] → box5.input
.run()
Stage flow:
box1.run(None, telemetry) → carry_data = output bytes
box2.run(carry_data, telemetry) → carry_data = output bytes
[box3, box4].run(carry, telemetry) → carry_data = JSON array merge
box5.run(carry_data, telemetry) → PipelineResult
For parallel stages (fan_out), each box runs in a separate tokio::task::JoinSet. Their outputs are merged as a JSON array for the next stage.
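The merge rule can be sketched as a small reduction over the stage outputs. This is an illustrative, dependency-free version working on raw JSON strings (the real runtime operates on parsed output bytes, and completion order within the `JoinSet` is an implementation detail):

```rust
// Sketch of the fan-out merge: each parallel box yields raw output bytes
// (JSON), and the carry for the next stage is a JSON array of those
// outputs. String assembly keeps the example self-contained.
fn merge_fan_out(outputs: &[String]) -> String {
    format!("[{}]", outputs.join(","))
}

fn main() {
    let box3 = r#"{"signal":"buy"}"#.to_string();
    let box4 = r#"{"signal":"hold"}"#.to_string();
    // Merged carry_data handed to box5 as its input.
    let carry = merge_fan_out(&[box3, box4]);
    assert_eq!(carry, r#"[{"signal":"buy"},{"signal":"hold"}]"#);
}
```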
### Interactive shell (`voidbox shell`)
voidbox shell --mount /project:/workspace:rw --program claude --memory-mb 3024 --vcpus 4 --network
│
├─ Auto-detect provider For `claude` / `claude-code`: claude-personal (OAuth) or claude (API key)
├─ Build shell config kind: sandbox, synthesized from CLI flags
│ (or load a spec and use the fields the shell path consumes)
│
├─ Build Sandbox kernel, initramfs, memory, vcpus, network, mounts
│ ├─ Stage credentials Write discovered OAuth creds into /home/sandbox/.claude/
│ ├─ Write onboarding flag /home/sandbox/.claude.json (skip login screen)
│ └─ Restore from snapshot If --snapshot or --auto-snapshot
│
├─ attach_pty(PtyOpenRequest) Connect vsock, handshake, send PtyOpen
│ │
│ [vsock port 1234]
│ │
│ guest-agent receives Validates allowlist
│ │ Acquires session slot (max 4 concurrent)
│ │ forkpty: child drops to uid:1000
│ │ Interactive mode: no RLIMIT_FSIZE
│ │
│ PtyOpened response Success or error
│ │
├─ RawModeGuard::engage() Host terminal → raw mode
│ │
│ ┌─── I/O loop (two threads) ────────────────────────────┐
│ │ Writer: stdin → PtyData frames → vsock → guest master │
│ │ Reader: guest master → PtyData frames → vsock → stdout│
│ └───────────────────────────────────────────────────────┘
│ │
│ PtyClosed { exit_code } Guest process exited
│ │
├─ drop(RawModeGuard) Restore terminal
├─ sandbox.stop() Stop VM
│
└─ exit(exit_code) Propagate guest exit code
Spec kinds:

| Kind | Agent block | PTY | Use case |
|---|---|---|---|
| `agent` | Required | No (headless exec) | Autonomous task execution |
| `sandbox` | None | Via `voidbox shell` | Interactive development |
| `agent` + `mode: interactive` | Required (empty prompt OK) | Yes | Interactive agent with prompt context |
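As a rough illustration, the three kinds map to spec shapes like the following. These are hypothetical sketches assembled from field names mentioned in this document (`kind`, `mode`, `sandbox.image`); the exact schema may differ:

```yaml
# Hypothetical sketches, not a schema reference.

# kind: agent — headless autonomous run
kind: agent
agent:
  prompt: "Analyze AAPL..."
---
# kind: sandbox — interactive development via `voidbox shell`
kind: sandbox
sandbox:
  image: "python:3.12-slim"
---
# kind: agent with interactive mode — PTY with prompt context
kind: agent
mode: interactive
agent:
  prompt: ""   # empty prompt is OK
```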
Security guarantees (same as headless exec):
Interactive PTY sessions preserve the full defense-in-depth stack:
- Layer 1: Hardware isolation (KVM/VZ) — separate kernel and memory space
- Layer 2: Seccomp-BPF on the VMM thread for Linux/KVM
- Layer 3: Session secret authentication over vsock
- Layer 4: Command allowlist — only approved binaries can be exec’d via PTY
- Layer 4: Privilege drop to uid:1000 for the PTY child process
- Layer 4: Resource limits (RLIMIT_NOFILE, RLIMIT_NPROC) applied to PTY child
- Layer 5: Backend-specific guest networking controls (Linux/KVM SLIRP controls; macOS/VZ NAT without those host-side filters yet)
The only difference: RLIMIT_FSIZE (max file size) is skipped for interactive
sessions (`PtyOpenRequest.interactive = true`). Interactive users need to write
files freely (e.g. Claude Code conversation logs can exceed 100 MB). Batch exec
retains the 100 MB limit as defense-in-depth.
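That policy can be sketched as follows. The enum and the NOFILE/NPROC values are illustrative placeholders, not the real guest-agent types or limits; only the FSIZE rule comes from the text above:

```rust
// Illustrative sketch: interactive PTY sessions skip RLIMIT_FSIZE,
// headless exec keeps the 100 MB file-size cap.
#[derive(Debug, PartialEq)]
enum Rlimit {
    Nofile(u64), // placeholder value below
    Nproc(u64),  // placeholder value below
    Fsize(u64),
}

fn limits_for(interactive: bool) -> Vec<Rlimit> {
    let mut limits = vec![Rlimit::Nofile(1024), Rlimit::Nproc(256)];
    if !interactive {
        // Batch exec: cap output files at 100 MB as defense-in-depth.
        limits.push(Rlimit::Fsize(100 * 1024 * 1024));
    }
    limits
}

fn main() {
    assert!(limits_for(false).contains(&Rlimit::Fsize(100 * 1024 * 1024)));
    assert!(!limits_for(true).iter().any(|l| matches!(l, Rlimit::Fsize(_))));
}
```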
## Wire Protocol

Host and guest communicate over `AF_VSOCK` (port 1234) using the `void-box-protocol` crate.

### Frame format
┌──────────────┬───────────┬──────────────────┐
│ length (4 B) │ type (1B) │ payload (N bytes)│
└──────────────┴───────────┴──────────────────┘
- `length`: `u32` little-endian, payload size only (excludes the 5-byte header)
- `type`: message type discriminant
- `payload`: message payload; usually JSON, but `Ping`, `Pong`, and `PtyData` use raw bytes
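A minimal sketch of this framing (not the actual `void-box-protocol` code) can make the layout concrete:

```rust
// 5-byte header: u32 little-endian payload length, then a one-byte type
// discriminant, then the payload itself.
fn encode_frame(msg_type: u8, payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(5 + payload.len());
    frame.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    frame.push(msg_type);
    frame.extend_from_slice(payload);
    frame
}

const MAX_MESSAGE_SIZE: usize = 64 * 1024 * 1024; // cap from the Security section

fn decode_frame(buf: &[u8]) -> Option<(u8, &[u8])> {
    if buf.len() < 5 {
        return None; // incomplete header
    }
    let len = u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if len > MAX_MESSAGE_SIZE || buf.len() < 5 + len {
        return None; // oversized (untrusted length) or incomplete payload
    }
    Some((buf[4], &buf[5..5 + len]))
}

fn main() {
    let frame = encode_frame(0x18, b"ls -la\r"); // 0x18 = PtyData (raw bytes)
    let (ty, payload) = decode_frame(&frame).unwrap();
    assert_eq!(ty, 0x18);
    assert_eq!(payload, &b"ls -la\r"[..]);
}
```

Rejecting frames whose declared length exceeds `MAX_MESSAGE_SIZE` is what prevents an untrusted guest from forcing a huge host-side allocation.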
### Message types
| Type byte | Direction | Message | Description |
|---|---|---|---|
| 0x01 | host → guest | ExecRequest | Execute a command (program, args, env, timeout) |
| 0x02 | guest → host | ExecResponse | Command result (stdout, stderr, exit_code) |
| 0x03 | host → guest | Ping | Session authentication handshake (raw secret + optional version) |
| 0x04 | guest → host | Pong | Authentication reply with protocol version |
| 0x05 | host → guest | Shutdown | Request guest shutdown |
| 0x06 | host → guest | FileTransfer | Legacy file transfer request |
| 0x07 | guest → host | FileTransferResponse | Legacy file transfer response |
| 0x08 | guest → host | TelemetryData | Guest telemetry batch |
| 0x09 | host → guest | TelemetryAck | Telemetry acknowledgement |
| 0x0A | host → guest | SubscribeTelemetry | Start telemetry stream |
| 0x0B | host → guest | WriteFile | Write file to guest filesystem |
| 0x0C | guest → host | WriteFileResponse | Write file acknowledgement |
| 0x0D | host → guest | MkdirP | Create directory tree |
| 0x0E | guest → host | MkdirPResponse | Mkdir acknowledgement |
| 0x0F | guest → host | ExecOutputChunk | Streaming output chunk (stream, data, seq) |
| 0x10 | host → guest | ExecOutputAck | Flow control ack (optional) |
| 0x11 | both | SnapshotReady | Guest signals readiness for live snapshot |
| 0x12 | host → guest | ReadFile | Read file from guest filesystem |
| 0x13 | guest → host | ReadFileResponse | File contents or error |
| 0x14 | host → guest | FileStat | Stat a guest file path |
| 0x15 | guest → host | FileStatResponse | File metadata (size, mode, mtime) |
| 0x16 | host → guest | PtyOpen | Open interactive PTY session (program, args, env, interactive) |
| 0x17 | guest → host | PtyOpened | PTY open result (success/error) |
| 0x18 | both | PtyData | Raw terminal I/O bytes (not JSON-encoded) |
| 0x19 | host → guest | PtyResize | Terminal window size change (cols, rows) |
| 0x1A | host → guest | PtyClose | Request PTY session close (SIGHUP to child) |
| 0x1B | guest → host | PtyClosed | PTY child exited (exit_code) |
Raw-byte payloads: `PtyData` uses raw bytes for terminal I/O, and `Ping`/`Pong` also use raw bytes for the authentication handshake and protocol version.

### Security
- `MAX_MESSAGE_SIZE`: 64 MB — prevents OOM from untrusted length fields
- Session secret: 32-byte hex token injected as `voidbox.secret=<hex>` in the kernel cmdline. The guest-agent reads it from `/proc/cmdline` at boot and requires it in every ExecRequest. Without the correct secret, the guest-agent rejects the request.
- ExecRequest `Debug` impl: redacts environment variables matching `KEY`, `SECRET`, `TOKEN`, `PASSWORD` patterns
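The redaction rule can be sketched as a simple substring match over the variable name. The function name and `<redacted>` marker are illustrative, not the actual `Debug` impl:

```rust
// Hedged sketch of the redaction rule: any env var whose (uppercased)
// name contains KEY, SECRET, TOKEN, or PASSWORD is masked before display.
fn redact_env(key: &str, value: &str) -> String {
    const PATTERNS: [&str; 4] = ["KEY", "SECRET", "TOKEN", "PASSWORD"];
    let upper = key.to_ascii_uppercase();
    if PATTERNS.iter().any(|p| upper.contains(p)) {
        format!("{key}=<redacted>")
    } else {
        format!("{key}={value}")
    }
}

fn main() {
    assert_eq!(redact_env("ANTHROPIC_API_KEY", "sk-123"), "ANTHROPIC_API_KEY=<redacted>");
    assert_eq!(redact_env("HOME", "/home/sandbox"), "HOME=/home/sandbox");
}
```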
## Guest Networking
On Linux/KVM, Void-Box uses smoltcp-based usermode networking (SLIRP) — no root, no TAP devices, no bridge configuration.
Guest VM Host
┌─────────────────────┐ ┌──────────────────┐
│ eth0: 10.0.2.15/24 │ │ │
│ gw: 10.0.2.2 │── virtio-net ──────│ SLIRP stack │
│ dns: 10.0.2.3 │ (MMIO) │ (smoltcp) │
└─────────────────────┘ │ │
│ 10.0.2.2 → NAT │
│ → 127.0.0.1 │
└──────────────────┘
- Guest IP: `10.0.2.15/24`
- Gateway: `10.0.2.2` (mapped to host `127.0.0.1`)
- DNS: `10.0.2.3` (forwarded to the host resolver)
- Outbound TCP/UDP is NATed through the host
- The guest reaches host services (e.g. Ollama on `:11434`) via `10.0.2.2`

### Networking Security

- Rate limiting on new connections
- Maximum concurrent connection limit
- CIDR deny list (configurable via `ipnet`)
On macOS/VZ, networking is provided by VZNATNetworkDeviceAttachment instead. The VM boundary remains the primary isolation control there, but the Linux SLIRP-specific deny list and rate-limit enforcement do not currently apply.
## Security Model

### Defense in depth
Layer 1: Hardware isolation (KVM / VZ)
└─ Separate kernel, memory space, devices per VM
Layer 2: Seccomp-BPF (Linux/KVM)
└─ VMM thread restricted to KVM ioctls + vsock + networking syscalls
Layer 3: Session authentication (vsock)
└─ 32-byte random secret, per-VM, injected at boot
Layer 4: Guest hardening (guest-agent)
├─ Command allowlist (only approved binaries execute)
├─ Resource limits via setrlimit (memory, files, processes)
├─ Privilege drop to uid:1000 for child processes
└─ Timeout watchdog with SIGKILL
Layer 5: Network isolation
├─ Linux/KVM: SLIRP rate limiting, connection caps, CIDR deny list
└─ macOS/VZ: NAT networking, without those host-side controls yet
### Session secret flow
Host Guest
│ │
├─ getrandom(32 bytes) │
├─ hex-encode → kernel cmdline │
│ voidbox.secret=abc123... │
│ │
│ boot │
│ ─────────────────────────────────────>│
│ ├─ parse /proc/cmdline
│ ├─ store in OnceLock
│ │
├─ ExecRequest { secret: "abc123..." } │
│ ─────────────────────────────────────>│
│ ├─ verify secret
│ ├─ execute if match
│ <─────────────────────────────────────┤
│ ExecResponse { ... } │
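The guest-side parse step above can be sketched in a few lines. The real guest-agent reads `/proc/cmdline` and stores the value in a `OnceLock`; this shows only the extraction logic:

```rust
// Extract the session secret from a kernel cmdline string: find the
// token with the voidbox.secret= prefix and return the remainder.
fn parse_secret(cmdline: &str) -> Option<&str> {
    cmdline
        .split_whitespace()
        .find_map(|tok| tok.strip_prefix("voidbox.secret="))
}

fn main() {
    let cmdline = "console=ttyS0 voidbox.secret=abc123 quiet";
    assert_eq!(parse_secret(cmdline), Some("abc123"));
    // No secret on the cmdline: every ExecRequest would be rejected.
    assert_eq!(parse_secret("console=ttyS0 quiet"), None);
}
```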
## Observability

### Trace structure
Pipeline span
└─ Stage 1 span (box_name="data_analyst")
├─ tool_call event: Read("input.json")
├─ tool_call event: Bash("curl ...")
└─ attributes: tokens_in, tokens_out, cost_usd, model
└─ Stage 2 span (box_name="quant_analyst")
└─ ...
### Guest telemetry

The guest-agent periodically reads `/proc/stat` and `/proc/meminfo` and sends telemetry batches (`TelemetryData` messages) over vsock. The host-side `TelemetryAggregator` ingests these and exports them as OTLP metrics.
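As an illustration of the guest-side sampling, here is a minimal parser for one `/proc/meminfo`-style counter. The function name is illustrative; the real guest-agent batches several metrics per frame:

```rust
// Pull a single kB counter out of /proc/meminfo-formatted text, e.g.
// "MemAvailable:     512000 kB" → Some(512000).
fn meminfo_kb(meminfo: &str, key: &str) -> Option<u64> {
    meminfo.lines().find_map(|line| {
        let rest = line.strip_prefix(key)?.strip_prefix(':')?;
        rest.trim().trim_end_matches(" kB").trim().parse().ok()
    })
}

fn main() {
    let sample = "MemTotal:        1024000 kB\nMemAvailable:     512000 kB\n";
    assert_eq!(meminfo_kb(sample, "MemTotal"), Some(1024000));
    assert_eq!(meminfo_kb(sample, "MemAvailable"), Some(512000));
    assert_eq!(meminfo_kb(sample, "SwapTotal"), None);
}
```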
### Configuration

| Env var | Description |
|---|---|
| `VOIDBOX_OTLP_ENDPOINT` | OTLP gRPC endpoint (e.g. `http://localhost:4317`) |
| `VOIDBOX_SERVICE_NAME` | Service name for traces (default: `void-box`) |

Enable at compile time: `cargo build --features opentelemetry`
## OCI Image Support

VoidBox uses OCI container images at three levels, all cached at `~/.voidbox/oci/`.

### Guest image (`sandbox.guest_image`)

Pre-built kernel + initramfs distributed as a `FROM scratch` OCI image containing two files: `vmlinuz` and `rootfs.cpio.gz`. Void-Box uses installed artifacts when available and otherwise auto-pulls from GHCR.
Resolution order:

1. `sandbox.kernel` / `sandbox.initramfs` (explicit paths in spec)
2. `VOID_BOX_KERNEL` / `VOID_BOX_INITRAMFS` (env vars)
3. Installed artifacts (`/usr/lib/voidbox/`, package-manager paths)
4. `sandbox.guest_image` (explicit OCI ref)
5. `ghcr.io/the-void-ia/voidbox-guest:v{version}` (default auto-pull)
6. None → mock fallback (`mode: auto`)
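The chain is essentially first-match-wins, which can be sketched as `Option` chaining. The types are simplified stand-ins; the real `resolve_guest_image()` also performs the GHCR pull as a side effect:

```rust
// Simplified model of the 6-step resolution chain above.
#[derive(Debug, PartialEq)]
enum GuestSource {
    Paths(String, String), // kernel + initramfs on disk
    OciRef(String),        // image to pull (and cache) from a registry
    Mock,                  // step 6: no guest available, mock fallback
}

fn resolve(
    spec_paths: Option<(String, String)>, // 1. explicit paths in spec
    env_paths: Option<(String, String)>,  // 2. env vars
    installed: Option<(String, String)>,  // 3. installed artifacts
    spec_image: Option<String>,           // 4. explicit OCI ref
    default_ref: Option<String>,          // 5. versioned GHCR default
) -> GuestSource {
    spec_paths
        .or(env_paths)
        .or(installed)
        .map(|(k, i)| GuestSource::Paths(k, i))
        .or(spec_image.map(GuestSource::OciRef))
        .or(default_ref.map(GuestSource::OciRef))
        .unwrap_or(GuestSource::Mock)
}

fn main() {
    // Explicit spec paths win over everything else.
    let src = resolve(
        Some(("vmlinuz".into(), "initramfs.cpio.gz".into())),
        None,
        None,
        Some("ghcr.io/x/custom:1".into()),
        Some("ghcr.io/the-void-ia/voidbox-guest:v0.1".into()),
    );
    assert_eq!(src, GuestSource::Paths("vmlinuz".into(), "initramfs.cpio.gz".into()));
    // Nothing configured → mock fallback.
    assert_eq!(resolve(None, None, None, None, None), GuestSource::Mock);
}
```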
Cache layout: `~/.voidbox/oci/guest/<sha256>/vmlinuz` + `rootfs.cpio.gz` + `<sha256>.done` marker.
### Base image (`sandbox.image`)

Full container image (e.g. `python:3.12-slim`) used as the guest root filesystem.

- Linux/KVM: the host builds a cached ext4 disk artifact from the extracted OCI rootfs and attaches it as `virtio-blk` (the guest sees `/dev/vda`).
- macOS/VZ: the rootfs remains directory-mounted (virtiofs path).
- The guest-agent switches root with overlayfs + `pivot_root` (or a secure switch-root fallback when the kernel returns `EINVAL` for initramfs root).
Security properties are preserved across both paths:

- The OCI root switch is driven only by kernel cmdline flags set by the trusted host.
- The command allowlist and authenticated vsock control channel still gate execution.
- The writable layer is tmpfs-backed; the base OCI lowerdir remains read-only.
Cache layout: `~/.voidbox/oci/rootfs/<sha256>/` (full layer extraction with whiteout handling).
### OCI skills

Container images mounted read-only at arbitrary guest paths (e.g. `/skills/python`). Each skill image is pulled, extracted, and mounted independently — no `sandbox.image` required. Declared in the spec:
skills:
- image: "python:3.12-slim"
mount: "/skills/python"
- image: "golang:1.23-alpine"
mount: "/skills/go"
### OCI client internals (`voidbox-oci/`)

| Module | Purpose |
|---|---|
| `registry.rs` | OCI Distribution HTTP client (anonymous + bearer auth, HTTP for localhost) |
| `manifest.rs` | Manifest / image index parsing, platform selection |
| `cache.rs` | Content-addressed blob cache + rootfs/guest done markers |
| `unpack.rs` | Layer extraction (full rootfs with whiteouts, or selective guest file extraction) |
| `lib.rs` | `OciClient`: `pull()`, `resolve_rootfs()`, `resolve_guest_files()` |
## Snapshots

VoidBox supports two kinds of VM snapshot (base and diff) for sub-second restore. All snapshot features are explicit opt-in only — no snapshot code runs unless the user declares a snapshot path.
### Snapshot types
| Type | When created | Contents | Use case |
|---|---|---|---|
| Base | After cold boot, VM stopped | Full memory dump + all KVM state | Golden image for repeated boots |
| Diff | After dirty tracking enabled, VM stopped | Only modified pages since base | Layered caching (base + delta) |
### Performance
Measured on Linux/KVM with 256 MB RAM, 1 vCPU, userspace virtio-vsock:
| Phase | Time | Notes |
|---|---|---|
| Cold boot | ~10 ms | |
| Base snapshot | ~420 ms | Full 256 MB memory dump |
| Base restore | ~1.3 ms | COW mmap, lazy page loading |
| Diff snapshot | ~270 ms | Only dirty pages (~1.5 MB, 0.6% of RAM) |
| Diff restore | ~3 ms | Base COW mmap + dirty page overlay |
| Base speedup | ~8x | Cold boot / base restore |
| Diff savings | 99.4% | Memory file size reduction |
### Storage layout
~/.void-box/snapshots/
└── <hash-prefix>/ # first 16 chars of config hash
├── state.bin # bincode: VmSnapshot (vCPU regs, irqchip, PIT, vsock, config)
├── memory.mem # full memory dump (base)
└── memory.diff # dirty pages only (diff snapshots)
### Restore flow
1. VmSnapshot::load(dir) Read state.bin (vCPU, irqchip, PIT, vsock, config)
2. Vm::new(memory_mb) Create KVM VM with matching memory size
3. restore_memory(mem, path) COW mmap(MAP_PRIVATE|MAP_FIXED) — lazy page loading
4. vm.restore_irqchip(state) Restore PIC master/slave + IOAPIC
5. VirtioVsockMmio::restore() Restore vsock device registers (userspace backend)
6. create_vcpu_restored(state) Per-vCPU restore (see register restore order below)
7. vCPU threads resume Guest continues execution from snapshot point
Memory restore uses the kernel's `MAP_PRIVATE` lazy page loading — pages are demand-faulted from the file, and writes create anonymous copies. No `userfaultfd` required.
### vCPU register restore order

The restore sequence in `cpu.rs` is order-sensitive. Getting it wrong causes
silent guest crashes (kernel panic → reboot via port 0x64).
1. MSRs KVM_SET_MSRS
2. sregs KVM_SET_SREGS (segment regs, CR0/CR3/CR4)
3. LAPIC KVM_SET_LAPIC + periodic timer bootstrap (see below)
4. vcpu_events KVM_SET_VCPU_EVENTS (exception/interrupt state)
5. XCRs (XCR0) KVM_SET_XCRS — MUST come before xsave
6. xsave (FPU/SSE) KVM_SET_XSAVE — depends on XCR0 for feature mask
7. regs KVM_SET_REGS (GP registers, RIP, RFLAGS)
XCR0 restore is critical. XCR0 controls which XSAVE features (x87, SSE,
AVX) are active. Without it, the guest’s XRSTORS instruction triggers a #GP
because the default XCR0 only enables x87, but the guest’s XSAVE area
references SSE/AVX features. This manifests as “Bad FPU state detected at
restore_fpregs_from_fpstate” → kernel panic → reboot loop.
### LAPIC timer bootstrap
When the guest was idle (NO_HZ) at snapshot time, the LAPIC timer is masked with vector=0 (LVTT=0x10000). After restore, no timer interrupt ever fires, so the scheduler never runs. The restore code detects this state and bootstraps a periodic LAPIC timer (mode=periodic, vector=0xEC, TMICT=0x200000, TDCR=divide-by-1) to kick the scheduler back to life.
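The detection step can be sketched from the LVTT encoding above: bit 16 is the mask bit and bits 0–7 are the vector, so a masked LVTT with vector 0 means no timer will ever fire after restore. The function name is illustrative:

```rust
// LVTT bit 16 = mask bit; bits 0..7 = interrupt vector.
const LVTT_MASKED: u32 = 1 << 16;

// True when the snapshot captured an idle (NO_HZ) guest whose LAPIC timer
// is masked with vector 0 — the state that needs a periodic-timer bootstrap.
fn needs_timer_bootstrap(lvtt: u32) -> bool {
    (lvtt & LVTT_MASKED) != 0 && (lvtt & 0xFF) == 0
}

fn main() {
    assert!(needs_timer_bootstrap(0x10000));  // idle NO_HZ guest at snapshot time
    assert!(!needs_timer_bootstrap(0x200EC)); // periodic mode, vector 0xEC, unmasked
    assert!(!needs_timer_bootstrap(0x000EC)); // live one-shot timer
}
```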
### Vsock backend for snapshots

The userspace virtio-vsock backend must be used for VMs that will be
snapshotted. The kernel vhost backend (`/dev/vhost-vsock`) does not expose
internal vring indices, making queue-state capture incomplete. The userspace
backend tracks `last_avail_idx`/`last_used_idx` directly, ensuring clean
snapshot/restore of the virtqueue state.
### CID preservation
The snapshot stores the VM’s actual CID (assigned at cold boot). On restore,
the same CID is reused — the guest kernel caches the CID during virtio-vsock
probe and silently drops packets with mismatched dst_cid.
### Opt-in plumbing

Every layer has an optional snapshot field that defaults to `None`:
| Layer | Field | Type | Default |
|---|---|---|---|
| `SandboxBuilder` | `.snapshot(path)` | `Option<PathBuf>` | `None` |
| `BoxConfig` | `snapshot` | `Option<PathBuf>` | `None` |
| `SandboxSpec` (YAML) | `sandbox.snapshot` | `Option<String>` | `None` |
| `BoxSandboxOverride` | `sandbox.snapshot` | `Option<String>` | `None` |
| `CreateRunRequest` (API) | `snapshot` | `Option<String>` | `None` |
Resolution chain: per-box override → top-level spec → `None` (cold boot).
### Snapshot resolution

When a snapshot string is provided, the runtime resolves it as:

- Hash prefix → `~/.void-box/snapshots/<prefix>/` (if `state.bin` exists)
- Literal path → treated as a directory path (if `state.bin` exists)
- Neither → warning printed, cold boot
No env var fallback, no auto-detection.
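Those rules can be sketched as a pure function, with the `state.bin` existence check injected so the logic is testable. Names and the `home` stand-in for `~/.void-box` are illustrative:

```rust
use std::path::{Path, PathBuf};

// Resolve a snapshot argument: hash prefix first, then literal path,
// then None (caller warns and cold-boots). No env vars, no auto-detection.
fn resolve_snapshot(
    arg: &str,
    home: &Path,
    has_state_bin: impl Fn(&Path) -> bool,
) -> Option<PathBuf> {
    // 1. Hash prefix under ~/.void-box/snapshots/<prefix>/
    let by_prefix = home.join("snapshots").join(arg);
    if has_state_bin(&by_prefix) {
        return Some(by_prefix);
    }
    // 2. Literal directory path
    let literal = PathBuf::from(arg);
    if has_state_bin(&literal) {
        return Some(literal);
    }
    // 3. Neither → cold boot
    None
}

fn main() {
    let home = Path::new("/home/user/.void-box");
    // Pretend exactly one snapshot directory contains state.bin.
    let exists = |p: &Path| p == Path::new("/home/user/.void-box/snapshots/abcd1234");
    assert_eq!(
        resolve_snapshot("abcd1234", home, exists),
        Some(PathBuf::from("/home/user/.void-box/snapshots/abcd1234"))
    );
    assert_eq!(resolve_snapshot("missing", home, exists), None);
}
```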
### Cache management

- LRU eviction: `evict_lru(max_bytes)` removes the oldest snapshots first
- Layer hashing: `compute_layer_hash(base, layer, content)` for deterministic cache keys
- Listing: `list_snapshots()` / `voidbox snapshot list`
- Deletion: `delete_snapshot(prefix)` / `voidbox snapshot delete <prefix>`
### Security considerations

Snapshot cloning shares identical VM state across restored instances:

- RNG entropy: restored VMs inherit the same `/dev/urandom` pool. Treat snapshot clones as cloned execution state: use short-lived tasks, avoid assuming fresh entropy after restore, and rebuild snapshots when that matters.
- ASLR: clones share the guest page-table layout. Mitigated by short-lived tasks, no direct network addressability (SLIRP NAT), and a command allowlist that limits the attack surface.
- Session isolation: restored VMs reuse the snapshot's stored session secret for vsock authentication (the secret is baked into the guest's kernel cmdline in snapshot memory). Per-restore secret rotation would require guest-side support.
## Developer Notes
For contributor setup, lint/test parity commands, and script usage, see
CONTRIBUTING.md.
For runtime setup commands and end-user usage examples, see README.md.
## Skill Types
| Type | Constructor | Provisioned as | Example |
|---|---|---|---|
| Agent | `Skill::agent("claude-code")` | Reasoning engine designation | The LLM itself |
| File | `Skill::file("path/to/SKILL.md")` | `/workspace/.claude/skills/{name}.md` | Domain methodology |
| Remote | `Skill::remote("owner/repo/skill")` | Fetched from GitHub, written to skills/ | `obra/superpowers/brainstorming` |
| MCP | `Skill::mcp("server-name")` | Entry in `/workspace/.mcp.json` and, for Codex, `~/.codex/config.toml` | Structured tool server |
| CLI | `Skill::cli("jq")` | Expected in guest initramfs | Binary tool |