The Ollama-first AI coding agent built for the terminal.
Forked from OpenCode to focus exclusively on local Ollama models with tool support.
Woreda is 100% local-first. No cloud APIs, no billing, no authentication. Just you, your powerful machine, and open-source models running locally via Ollama.
Why local-only?
We target Ollama models with tool support - specifically optimized for coding tasks like qwen2.5-coder, deepseek-coder-v2, and llama3.2.
ollama pull qwen2.5-coder # Recommended: Best for code
ollama pull deepseek-coder-v2 # Alternative: Great for reasoning
ollama pull llama3.2 # Alternative: Fast and capable
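Not sure what is already on your machine? ollama list shows the models you have pulled locally:
ollama list                   # See which models are already installed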
# From source (for now)
git clone https://github.com/samifouad/woreda.git
cd woreda
bun install
bun run build
bun link
# Use it
woreda spawn
Woreda only shows Ollama models with tool support. These models can execute bash commands, edit files, and use the full agent toolkit.
Recommended:
qwen2.5-coder - Best overall for code generation
deepseek-coder-v2 - Excellent reasoning capabilities
llama3.2 - Fast and lightweight (3B/1B variants available)
Also supported:
llama3.1 - Strong general capabilities
mistral-nemo - Good balance of speed and capability
command-r - Cohere's tool-calling model
firefunction-v2 - Fireworks' function-calling specialist
To appear in Woreda, a model must:
Be pulled locally via Ollama
Support tool calling (or be marked tool_call: true in your config)
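To check whether a pulled model reports tool support, you can inspect it with ollama show (recent Ollama versions include a Capabilities section; the exact output varies by version):
ollama show qwen2.5-coder     # Look for "tools" under Capabilities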
Switch between agents with the Tab key:
All the standard tools you need:
bash - Execute shell commands (with tree-sitter safety parsing)
edit - Intelligent file editing with diffs
read - Smart file reading
write - File creation
grep - Pattern-based code search
glob - File pattern matching
task - Multi-step task execution
mcp - Model Context Protocol support
Woreda keeps full MCP (Model Context Protocol) support. Connect external tools and servers:
// .opencode/opencode.jsonc
{
  "mcp": {
    "servers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
      }
    }
  }
}
One of Woreda's core focuses is intelligent context window management for local models:
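For context on the main knob involved: Ollama serves each model with a fairly small default context window, and a larger one can be requested per call via the options.num_ctx field of its API. A minimal sketch of that underlying mechanism (the model and value are only examples):
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder",
  "prompt": "Summarize this repository",
  "options": { "num_ctx": 16384 }
}'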
Woreda uses the same config format as OpenCode for compatibility:
// .opencode/opencode.jsonc (or ~/.opencode/opencode.jsonc)
{
  "model": "ollama/qwen2.5-coder:latest",
  "provider": {
    "ollama": {
      "api": "http://localhost:11434/v1", // Override if needed
      "models": {
        "custom-model:latest": {
          "name": "My Custom Model",
          "tool_call": true // Mark as tool-capable
        }
      }
    }
  }
}
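The custom-model:latest entry above has to exist in your local Ollama library before Woreda can use it. One way to create such a model is from a Modelfile with ollama create (the base model and settings below are purely illustrative):

# Modelfile (illustrative)
FROM qwen2.5-coder:latest
PARAMETER temperature 0.2

# Register it under the name referenced in the config above
ollama create custom-model -f Modelfile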
OLLAMA_HOST - Override Ollama API location (default: http://localhost:11434)
# Start TUI
woreda spawn
# Run a one-off command
woreda run "refactor this function to use async/await"
# Attach to running server
woreda tui attach
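Since Woreda reads OLLAMA_HOST from the environment, you can point a single invocation at an Ollama instance on another machine (the address below is just a placeholder):

OLLAMA_HOST=http://192.168.1.50:11434 woreda spawn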
# Clone and setup
git clone https://github.com/samifouad/woreda.git
cd woreda
bun install
# Run in dev mode
bun dev
# Type checking
bun typecheck
# Tests
bun test
# Build
bun run build
num_ctx auto-tuning
OpenCode is excellent but deliberately multi-provider. It treats Ollama as a second-class citizen requiring manual JSON configuration. Woreda inverts this: Ollama is the only citizen, and we optimize everything around local model performance.
Focus breeds excellence. By targeting only Ollama, we can:
They're great! But they're also well-served by existing tools (including OpenCode, Claude Code, Cursor, etc.). Woreda carves out a niche: local-only, privacy-first, cost-free coding.
Absolutely! MCP is provider-agnostic and works great with local models.
No. That's the whole point. Use OpenCode if you want multi-provider support.
Forked from OpenCode by the SST team. Massive respect for their work building the foundation.
Changes:
MIT (inherited from OpenCode)