Local LLMs with Ollama
VoidBox can use local models served by Ollama instead of the Anthropic API. The guest VM reaches Ollama through the platform networking bridge — no Anthropic API key required.
1. Prerequisites
- Install Ollama: ollama.com
- Pull a model: `ollama pull phi4-mini`
- Ensure Ollama is running: `ollama serve`
- Build the guest initramfs (see Architecture)
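Before building the guest image, it helps to confirm the Ollama daemon is actually answering. A minimal check against Ollama's `/api/tags` endpoint (which lists the models you have pulled), written so it degrades gracefully when the daemon is down:

```shell
# Sanity check: is Ollama answering on its default port (11434)?
# -f makes curl fail on HTTP errors; output is suppressed, only the status matters.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama: reachable"
else
  echo "ollama: not reachable (start it with 'ollama serve')"
fi
```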
2. How guest-to-host networking works
The host address depends on the backend:
- Linux/KVM: `10.0.2.2:11434`
- macOS/VZ NAT: `192.168.64.1:11434`
```
    Guest VM                           Host
┌──────────────┐                ┌──────────────┐
│ claude-code  │───────────────>│ Ollama:11434 │
│              │                │   (host)     │
└──────────────┘                └──────────────┘
```
For Claude-shaped providers, VoidBox injects `ANTHROPIC_BASE_URL` into the guest so that claude-code talks to the host's Ollama endpoint instead of the Anthropic API.
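As an illustration only (VoidBox's actual injection code is not shown here, and the helper name below is hypothetical), the injected value is just the backend-specific gateway address rendered as a base URL:

```rust
// Hypothetical helper: render a guest-visible gateway address as a base URL.
// The real injection happens inside VoidBox; this only shows the shape of the value.
fn anthropic_base_url(gateway: &str, port: u16) -> String {
    format!("http://{gateway}:{port}")
}

fn main() {
    // Linux/KVM gateway from section 2.
    assert_eq!(anthropic_base_url("10.0.2.2", 11434), "http://10.0.2.2:11434");
    // macOS/VZ NAT gateway.
    assert_eq!(anthropic_base_url("192.168.64.1", 11434), "http://192.168.64.1:11434");
    println!("ok");
}
```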
3. Code example
```rust
use void_box::agent_box::VoidBox;
use void_box::llm::LlmProvider;
use void_box::skill::Skill;

// Model name is configurable via OLLAMA_MODEL; defaults to phi4-mini.
let model = std::env::var("OLLAMA_MODEL")
    .unwrap_or_else(|_| "phi4-mini".into());

// Pick the host address for the current backend (see section 2).
let ollama_host = if cfg!(target_os = "macos") {
    "http://192.168.64.1:11434" // VZ NAT gateway
} else {
    "http://10.0.2.2:11434" // KVM user-mode networking gateway
};

let agent = VoidBox::new("ollama_demo")
    .llm(LlmProvider::ollama_with_host(&model, ollama_host))
    .skill(Skill::agent("claude-code"))
    .memory_mb(256)
    .prompt("Write a Python script that prints the first 10 Fibonacci numbers.")
    .build()?;

let result = agent.run(None, None).await?;
```
4. Running the example
```shell
OLLAMA_MODEL=phi4-mini \
VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
VOID_BOX_INITRAMFS=/tmp/void-box-rootfs.cpio.gz \
cargo run --example ollama_local
```
5. Environment variables
- `OLLAMA_MODEL` — Ollama model name (e.g. `phi4-mini`, `qwen3-coder`)
- `VOID_BOX_KERNEL` — path to the host kernel image for KVM
- `VOID_BOX_INITRAMFS` — path to the guest initramfs built by `build_guest_image.sh`
- `VOIDBOX_LLM_BASE_URL` — optional override when you need a non-default host endpoint
Without `VOID_BOX_KERNEL` set, the example falls back to mock mode (no real VM).
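The fallback can be sketched as a simple presence check on the kernel path. This is an assumption about the gating logic for illustration, not VoidBox's exact code:

```rust
// Assumed gating logic: a real VM needs a kernel image; without one, run mocked.
fn run_mode(kernel_path: Option<&str>) -> &'static str {
    match kernel_path {
        Some(_) => "vm",  // VOID_BOX_KERNEL was provided
        None => "mock",   // no kernel image, so no real VM
    }
}

fn main() {
    let kernel = std::env::var("VOID_BOX_KERNEL").ok();
    println!("mode = {}", run_mode(kernel.as_deref()));
}
```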
6. Next
See Pipeline Composition to chain Ollama-backed boxes, or define specs with YAML.