# claw-code-free

**Repository Path**: lr998/claw-code-free

# Claw Code — Free Local LLM Edition

An agentic coding CLI that works with **free local models** via [Ollama](https://ollama.com/), no API keys required.

Cloned from [ultraworkers/claw-code](https://github.com/ultraworkers/claw-code) with modifications to support OpenAI-compatible providers (Ollama, LM Studio, etc.), so you can use free models like Qwen, Llama, Mistral, and more.

## What We Changed

The original project only supported Anthropic's Claude API. We modified the CLI to use **dynamic provider detection**: when you pass a non-Anthropic model via `--model`, requests are automatically routed to your local Ollama instance through the OpenAI-compatible API.

**Key changes:**

- `rust/crates/claw-cli/src/main.rs` — Switched from the hardcoded `ClawApiClient` to a `ProviderClient` with model-based provider routing
- `rust/crates/api/src/client.rs` — Added base URL support for the default auth path

## Quickstart

### Prerequisites

1. Install [Rust](https://rustup.rs/)
2. Install [Ollama](https://ollama.com/)
3.
Pull a model:

```bash
ollama pull qwen2.5-coder:7b
```

### Build

```bash
cd rust
cargo build --release
```

### Run

**PowerShell:**

```powershell
$env:ANTHROPIC_API_KEY = ""
$env:ANTHROPIC_AUTH_TOKEN = ""
$env:OPENAI_API_KEY = "dummy"
$env:OPENAI_BASE_URL = "http://localhost:11434/v1"
.\target\release\claw.exe --model qwen2.5-coder:7b
```

**Bash / macOS / Linux:**

```bash
export ANTHROPIC_API_KEY=""
export ANTHROPIC_AUTH_TOKEN=""
export OPENAI_API_KEY="dummy"
export OPENAI_BASE_URL="http://localhost:11434/v1"
./target/release/claw --model qwen2.5-coder:7b
```

> Make sure Ollama is running first (`ollama serve` or open the Ollama desktop app).

### Recommended Models

| Model | Size | Good at | Command |
|-------|------|---------|---------|
| `qwen2.5-coder:7b` | ~4GB | Coding | `ollama pull qwen2.5-coder:7b` |
| `qwen2.5-coder:14b` | ~9GB | Coding (better) | `ollama pull qwen2.5-coder:14b` |
| `mistral:7b` | ~4GB | General + tool calling | `ollama pull mistral:7b` |
| `llama3.1:8b` | ~5GB | General purpose | `ollama pull llama3.1:8b` |
| `llama3.1:70b` | ~40GB | Best quality (needs lots of RAM) | `ollama pull llama3.1:70b` |

### Known Limitations

- Small local models (7B-14B) may **not reliably use tools** (file read/write, bash, etc.). They tend to describe what to do rather than actually doing it. For full agentic functionality, larger models (70B+) or cloud APIs work better.
- Tool/function calling quality varies by model. Mistral tends to handle it better than others at small sizes.

## Original Project

This project is based on [ultraworkers/claw-code](https://github.com/ultraworkers/claw-code), which is a Rust port of the Claw Code agent harness. All credit for the original implementation goes to the upstream authors.

## Disclaimer

- This repository does **not** claim ownership of the original Claw Code source material.
- This repository is **not affiliated with, endorsed by, or maintained by the original authors**.
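## Appendix: How Provider Routing Works (Sketch)

The dynamic provider detection described in "What We Changed" can be sketched roughly as follows. Note that `Provider` and `provider_for` are hypothetical names chosen for illustration; they are not the project's actual types, which live in `rust/crates/claw-cli/src/main.rs`.

```rust
// Illustrative sketch only — not the actual claw-code implementation.
#[derive(Debug, PartialEq)]
enum Provider {
    Anthropic,
    OpenAiCompatible,
}

/// Route Claude models to the Anthropic API; everything else goes to
/// the OpenAI-compatible endpoint (Ollama, LM Studio, ...).
fn provider_for(model: &str) -> Provider {
    if model.starts_with("claude") {
        Provider::Anthropic
    } else {
        Provider::OpenAiCompatible
    }
}

fn main() {
    assert_eq!(provider_for("claude-3-5-sonnet"), Provider::Anthropic);
    assert_eq!(provider_for("qwen2.5-coder:7b"), Provider::OpenAiCompatible);
    println!("routing ok");
}
```

The point of routing on the model name is that no extra flag is needed: `--model qwen2.5-coder:7b` alone is enough to select the local backend.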
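The base-URL support added in `rust/crates/api/src/client.rs` pairs with the `OPENAI_BASE_URL` variable set in the Run section. A minimal sketch of how such a lookup might work is below; `base_url` is an illustrative helper, not the actual claw-code function.

```rust
use std::env;

// Illustrative sketch only — not the actual claw-code implementation.
// Read the OpenAI-compatible endpoint from OPENAI_BASE_URL, falling
// back to the default local Ollama server.
fn base_url() -> String {
    env::var("OPENAI_BASE_URL")
        .unwrap_or_else(|_| "http://localhost:11434/v1".to_string())
}

fn main() {
    println!("using endpoint: {}", base_url());
}
```

This is why pointing `OPENAI_BASE_URL` at LM Studio (or any other OpenAI-compatible server) works the same way as the Ollama setup shown above.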