
Connect OpenClaw to Basebox

"The best way to predict the future is to invent it." — Alan Kay

Turn your code editor into an AI-powered development environment! This guide shows you how to connect OpenClaw, an advanced AI coding assistant, to your Basebox instance for autonomous coding workflows.

Why This Matters

Instead of copying code into ChatGPT or Claude, you'll have an AI that can:

  • Write and edit code directly in your files
  • Run terminal commands and see the results
  • Understand your entire codebase
  • Work with multiple files simultaneously
  • Use your private, self-hosted models

Table of Contents

  1. What is OpenClaw?
  2. Prerequisites
  3. Installation
  4. Configuration
  5. Testing the Connection
  6. Usage Examples
  7. Troubleshooting
  8. Next Steps

What is OpenClaw?

OpenClaw is an AI coding assistant that works like having a senior developer pair-programming with you. Unlike simple code completion tools, OpenClaw can:

🤖 Think and Act Independently

  • Plan multi-step coding tasks
  • Execute terminal commands
  • Read and modify multiple files
  • Debug and fix issues autonomously

🔧 Integrate with Your Workflow

  • Works in your existing terminal
  • Understands your project structure
  • Maintains conversation context
  • Supports multiple programming languages

🌐 Connect to Any AI Provider

  • OpenAI, Anthropic, or your own Basebox instance
  • Switch between different models for different tasks
  • Keep your code and data private when using Basebox

Perfect for: Code reviews, refactoring, adding features, debugging, documentation, and learning new codebases.


Prerequisites

Before you begin, ensure you have:

  1. Basebox Instance: A running Basebox deployment (self-hosted or cloud)
  2. API Token: Your Basebox API token (see Getting Your API Token)
  3. Basebox URL: Your Basebox API endpoint URL
  4. Supported Model: At least one chat model configured in Basebox (e.g., Claude, GPT, Llama)
  5. Shell Access: Terminal/command-line access on your development machine

Getting Your API Token

  1. Log in to your Basebox instance
  2. Navigate to your user profile or settings
  3. Find the API Tokens or Developer section
  4. Generate a new API token
  5. Copy and save it securely (you'll need it for configuration)

Example token format:

YTIz......ZjYw


Installation

Install OpenClaw using the official installation script:

curl -fsSL https://openclaw.ai/install.sh | bash

This will:

  • Download the latest OpenClaw binaries
  • Install the CLI tools (the openclaw command)
  • Set up the OpenClaw gateway service
  • Configure default settings

Verify installation:

openclaw --version

You should see output like:

OpenClaw v1.x.x


Configuration

Step 1: Configure Basebox as a Custom Provider

OpenClaw uses a configuration system to manage LLM providers. You'll add Basebox as a custom provider using the openclaw config set command.

Command Template:

openclaw config set models.providers.custom-provider '{
  "baseUrl": "YOUR_BASEBOX_URL",
  "apiKey": "YOUR_BASEBOX_API_TOKEN",
  "api": "openai-completions",
  "models": [
    {
      "id": "MODEL_ID",
      "name": "MODEL_DISPLAY_NAME",
      "contextWindow": CONTEXT_SIZE,
      "maxTokens": MAX_OUTPUT_TOKENS,
      "input": ["text"]
    }
  ]
}'

Real Example:

Replace the placeholder values with your actual Basebox configuration:

openclaw config set models.providers.custom-provider '{
  "baseUrl": "https://test.bbox-dev.de/v1",
  "apiKey": "YTIz......ZjYw",
  "api": "openai-completions",
  "models": [
    {
      "id": "claude-sonnet-4",
      "name": "Claude Sonnet 4",
      "contextWindow": 200000,
      "maxTokens": 8192,
      "input": ["text"]
    }
  ]
}'

Configuration Parameters Explained:

| Parameter                | Description                                                                       | Example                      |
|--------------------------|-----------------------------------------------------------------------------------|------------------------------|
| baseUrl                  | Your Basebox API endpoint base URL (including /v1, but without /chat/completions) | https://test.bbox-dev.de/v1  |
| apiKey                   | Your Basebox API token                                                            | YTIz...ZjYw                  |
| api                      | API protocol (use openai-completions for Basebox)                                 | openai-completions           |
| models[].id              | Model identifier as configured in Basebox                                         | claude-sonnet-4              |
| models[].name            | Human-readable display name                                                       | Claude Sonnet 4              |
| models[].contextWindow   | Maximum context window size, in tokens                                            | 200000                       |
| models[].maxTokens       | Maximum output tokens per request                                                 | 8192                         |
| models[].input           | Supported input types                                                             | ["text"]                     |
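Because the provider JSON is passed to openclaw config set as one quoted string, a typo (a missing key, broken quoting) only surfaces later at the gateway. A quick way to sanity-check it beforehand is a small script; the sketch below is this guide's own helper, not an OpenClaw feature, and the key names are taken from the template above:

```python
import json

# Required keys, taken from the provider template in this guide.
REQUIRED_PROVIDER_KEYS = {"baseUrl", "apiKey", "api", "models"}
REQUIRED_MODEL_KEYS = {"id", "name", "contextWindow", "maxTokens", "input"}

def validate_provider(raw: str) -> list[str]:
    """Return a list of problems found in a provider config JSON string."""
    problems = []
    try:
        cfg = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems += [f"missing key: {k}" for k in REQUIRED_PROVIDER_KEYS - cfg.keys()]
    for i, model in enumerate(cfg.get("models", [])):
        problems += [f"models[{i}] missing key: {k}"
                     for k in REQUIRED_MODEL_KEYS - model.keys()]
    if not cfg.get("baseUrl", "").startswith("https://"):
        problems.append("baseUrl should use https:// in production")
    return problems

example = '''{
  "baseUrl": "https://test.bbox-dev.de/v1",
  "apiKey": "YTIz......ZjYw",
  "api": "openai-completions",
  "models": [{"id": "claude-sonnet-4", "name": "Claude Sonnet 4",
              "contextWindow": 200000, "maxTokens": 8192, "input": ["text"]}]
}'''
print(validate_provider(example))  # an empty list means the config looks sane
```

Run it against the exact string you plan to paste into the command; an empty result list means all required keys are present.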

Step 2: Restart the Gateway

After updating the configuration, restart the OpenClaw gateway to apply changes:

openclaw gateway restart

You should see output indicating the gateway has restarted successfully.


Testing the Connection

Quick Health Check

Run the OpenClaw doctor command to verify your configuration:

openclaw doctor

Expected Output:

✓ OpenClaw CLI installed
✓ Gateway running
✓ Configuration valid
✓ Custom provider configured
✓ Models available: claude-sonnet-4

If you see any errors, check the Troubleshooting section.

Test with a Simple Query

Send a test message to verify the Basebox connection:

openclaw agent --message "Hello, which model are you?" --agent main

Expected Response:

I'm running on **claude-sonnet-4** (via custom-provider).

You can see this anytime by running `/status` or asking me to show session details.

If you receive a response, congratulations! OpenClaw is successfully integrated with Basebox.


Usage Examples

Note: the examples in this section have not been tested end-to-end; adjust flags as needed for your OpenClaw version.

Example 1: Code Review

openclaw agent --message "Review this Python function for potential bugs" \
  --file src/utils.py \
  --agent main

Example 2: Generate Tests

openclaw agent --message "Generate unit tests for the User class" \
  --file models/user.py \
  --agent main

Example 3: Refactor Code

openclaw agent --message "Refactor this code to use async/await patterns" \
  --file api/handlers.py \
  --agent main

Example 4: Documentation Generation

openclaw agent --message "Add docstrings to all functions in this module" \
  --file services/auth.py \
  --agent main

Example 5: Multi-File Analysis

openclaw agent --message "Analyze the authentication flow across these files" \
  --file auth/login.rs \
  --file auth/session.rs \
  --file auth/middleware.rs \
  --agent main

Example 6: Interactive Session

Start an interactive chat session for iterative development:

openclaw dashboard

Troubleshooting

Gateway Won't Start

Problem: openclaw gateway restart fails or times out.

Solutions:

  1. Check whether another process is using the gateway port:

     lsof -i :8080  # or whatever port OpenClaw uses

  2. Check the gateway logs:

     openclaw gateway logs

  3. Restart the gateway service completely:

     openclaw gateway stop
     openclaw gateway start

Connection Refused / 401 Unauthorized

Problem: Getting connection errors when running agents.

Solutions:

  1. Verify your API token is correct:
     • Check for extra spaces or newlines
     • Ensure the token hasn't expired
     • Generate a new token in Basebox if needed

  2. Verify the baseUrl is correct:
     • It should NOT include /chat/completions
     • It should use https:// for production

  3. Test manually with curl:

     curl -H "Authorization: Bearer YOUR_TOKEN" \
          https://your-basebox.example.com/v1/models

Invalid Configuration

Problem: openclaw doctor shows configuration errors.

Solutions:

  1. View the current configuration:

     openclaw config list

  2. Remove the invalid configuration:

     openclaw config unset models.providers.custom-provider

  3. Reconfigure with the correct values (see Configuration)

Slow Responses

Problem: Agents take a long time to respond.

Possible Causes:

  • A large context window causing slow inference
  • Network latency to the Basebox instance
  • A model not optimized for your hardware

Solutions:

  1. Reduce contextWindow in the model configuration
  2. Use a smaller model for faster responses
  3. Check Basebox instance performance and scaling

SSL/TLS Certificate Errors

Problem: Certificate verification failures.

Solutions:

  1. For development/testing with self-signed certificates:

     # Add to your shell profile
     export NODE_TLS_REJECT_UNAUTHORIZED=0  # NOT for production!

  2. For production: ensure valid SSL certificates are configured on Basebox
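If any of your own tooling talks to Basebox from Python, the equivalent of the NODE_TLS_REJECT_UNAUTHORIZED workaround is an SSL context with verification disabled. A sketch, strictly for development against self-signed certificates:

```python
import ssl

# DEVELOPMENT ONLY: disable certificate verification for self-signed certs.
dev_ctx = ssl.create_default_context()
dev_ctx.check_hostname = False      # must be turned off before verify_mode
dev_ctx.verify_mode = ssl.CERT_NONE
# Pass dev_ctx as the `context` argument to urllib.request.urlopen(...).

# PRODUCTION: the default context already verifies certificates.
prod_ctx = ssl.create_default_context()
print(prod_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

As with the environment variable, never ship the unverified context to production; fix the certificate on the Basebox side instead.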


Next Steps

Advanced Configuration

  • Custom Agents: Create specialized agents for different tasks
  • Prompt Templates: Customize agent behavior with custom system prompts
  • Tool Integration: Add custom tools for database queries, API calls, etc.
  • Multi-Model Workflows: Use different models for different tasks

Integration with IDEs

OpenClaw can integrate with various editors:

  • VS Code: via the OpenClaw extension
  • Neovim: via the OpenClaw plugin
  • Terminal: direct CLI usage

See the OpenClaw documentation for IDE-specific setup.

Best Practices

  1. Use Specific Models for Specific Tasks:
     • Large models (Claude Sonnet, GPT-4) for complex refactoring
     • Smaller models (Llama 3.3) for simple queries
     • Fast models for auto-completion and suggestions

  2. Optimize Context Windows:
     • Don't send entire codebases for simple questions
     • Use focused file selections
     • Consider context limits when working with large files

  3. Secure Your Tokens:
     • Never commit API tokens to version control
     • Use environment variables or secure vaults
     • Rotate tokens periodically

  4. Monitor Usage:
     • Track token consumption via Basebox analytics
     • Set up alerts for unusual activity
     • Review agent performance regularly
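For the token-security point above, a common pattern is to keep the token out of files entirely and read it from the environment at run time. A hypothetical helper (the BASEBOX_API_TOKEN variable name is this guide's convention, not something OpenClaw requires):

```python
import os

def basebox_token() -> str:
    """Read the Basebox API token from the environment, failing loudly if unset."""
    token = os.environ.get("BASEBOX_API_TOKEN")
    if not token:
        raise RuntimeError(
            "BASEBOX_API_TOKEN is not set; export it in your shell "
            "or load it from a secrets manager before starting OpenClaw."
        )
    return token.strip()  # guard against stray whitespace/newlines

# Demo only; in practice set the variable in your shell, not in code.
os.environ["BASEBOX_API_TOKEN"] = "YTIz......ZjYw"
print(basebox_token())
```

Failing loudly on a missing token is deliberate: a silent empty string would surface later as a confusing 401 from Basebox rather than a clear local error.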

Have Fun!

You're now ready to leverage the power of self-hosted AI models with OpenClaw. Happy coding! 🚀

Questions or Issues?

  • Check the Basebox documentation
  • Review the OpenClaw documentation
  • Ask your Basebox administrator for help with API access


Last updated: 2026-03-25