# Configuration

Customize Hicortex to fit your workflow.

## Config File

All configuration lives in `~/.hicortex/config.json`. It is created automatically by `npx @gamaze/hicortex init`.
```json
{
  "mode": "server",
  "port": 8787,
  "authToken": "your-secret-token",
  "licenseKey": "hctx-your-key-here"
}
```
Defaults work out of the box. Only add config if you want to customize.
## LLM Configuration
Hicortex needs an LLM for distillation, scoring, and reflection. It auto-detects your setup in this order:
- Ollama — if running locally, uses it automatically
- Claude CLI — detected from your Claude Code installation
- API keys — from environment variables
- OpenClaw config — when running as OC plugin, uses OC's LLM config
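The fallback order above can be sketched as follows. This is an illustrative sketch only — the function and parameter names are hypothetical, not Hicortex's actual implementation:

```python
def detect_llm_backend(ollama_running: bool, claude_cli_installed: bool,
                       env: dict, is_oc_plugin: bool) -> str:
    """Sketch of the auto-detection order described above (hypothetical names)."""
    if ollama_running:                 # 1. local Ollama wins if present
        return "ollama"
    if claude_cli_installed:           # 2. Claude Code's built-in CLI
        return "claude-cli"
    # 3. cloud provider keys from the environment
    if any(k in env for k in ("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GOOGLE_API_KEY")):
        return "api-key"
    if is_oc_plugin:                   # 4. inherit OpenClaw's LLM config
        return "openclaw"
    return "none"
```

The point of the ordering is that earlier options are cheaper or more private: a local Ollama instance is preferred over anything that sends data to a cloud provider.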
### Ollama (Recommended)
Free, local, private. Install Ollama and pull a model:
```bash
ollama pull qwen3.5:9b    # recommended (6GB, works on most machines)
ollama pull qwen3.5:27b   # higher quality if you have 32GB+ RAM
```
Hicortex auto-detects Ollama and selects the largest available model. Minimum: 9b for distillation quality. With Ollama, all processing happens locally — no memory data ever leaves your machine.
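If you would rather pin a specific model than rely on largest-model selection, the `llmBackend`, `llmBaseUrl`, and `llmModel` fields from the Split Model Configuration section can be set directly. A minimal sketch (the values shown are this page's defaults, not required):

```json
{
  "llmBackend": "ollama",
  "llmBaseUrl": "http://localhost:11434",
  "llmModel": "qwen3.5:9b"
}
```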
### Claude CLI
If you have Claude Code installed, Hicortex can use its built-in Claude CLI (Haiku model) as the LLM backend. Auto-detected — no configuration needed. Uses your existing Claude subscription.
### API Keys
Set environment variables for cloud providers:
```bash
# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...

# OpenAI
export OPENAI_API_KEY=sk-...

# Google
export GOOGLE_API_KEY=...
```
Hicortex supports 20+ providers including OpenRouter, z.ai, Groq, DeepSeek, and any OpenAI-compatible endpoint.
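For an OpenAI-compatible endpoint, the provider and base URL can be set explicitly using the `llmProvider`, `llmBaseUrl`, and `llmModel` fields documented on this page. A hedged sketch — the URL and model name are placeholders, and the exact `llmProvider` string for your gateway may differ:

```json
{
  "llmProvider": "openai",
  "llmBaseUrl": "http://my-gateway.example.com/v1",
  "llmModel": "my-model"
}
```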
## Split Model Configuration
Advanced: use different models for different pipeline stages. Useful when you have a fast small model for scoring and a larger model for distillation/reflection.
```json
{
  "llmBackend": "ollama",
  "llmBaseUrl": "http://localhost:11434",
  "llmModel": "qwen3.5:9b",
  "distillModel": "qwen3.5:9b",
  "reflectModel": "qwen3.5:27b",
  "reflectBaseUrl": "http://other-machine:11434",
  "reflectProvider": "ollama"
}
```
| Field | Used for | Description |
|---|---|---|
| `llmModel` | Importance scoring | Fast tier — simple numeric output |
| `distillModel` | Session distillation | Extracts memories from transcripts (9b+ recommended) |
| `reflectModel` | Nightly reflection | Generates lessons from memories (largest model) |
| `reflectBaseUrl` | Reflection endpoint | Optional separate Ollama instance (e.g. remote machine with more RAM) |
## Auth Token
The server uses a bearer token for all non-health endpoints. A default token is set automatically during setup. For production or remote access, set a custom token:
```json
{
  "authToken": "your-custom-secret-token"
}
```
Or via environment variable: `HICORTEX_AUTH_TOKEN=your-token`

`/health` and localhost connections bypass auth. All other endpoints (MCP tools, `/ingest`) require the bearer token. Client mode auto-connects using the server's configured token.
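Clients send the token as a standard HTTP bearer header. A minimal sketch in Python — the port and the `/ingest` path come from this page's defaults, and the request construction is illustrative, not Hicortex's actual client code:

```python
import os
import urllib.request

# Fall back to a placeholder if the env var isn't set (assumption for this sketch)
token = os.environ.get("HICORTEX_AUTH_TOKEN", "your-secret-token")

req = urllib.request.Request(
    "http://localhost:8787/ingest",                 # /ingest requires the bearer token
    headers={"Authorization": f"Bearer {token}"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; /health needs no token at all.
```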
## License Key
Add your license key to unlock higher tiers:
```json
{
  "licenseKey": "hctx-abc123def456"
}
```
Or tell your agent: "Activate my Hicortex key: hctx-abc123def456"
| Tier | Memories | Machines | Price |
|---|---|---|---|
| Starter (Free) | 250 | Unlimited (trial) | $0 |
| Pro | Unlimited | Single machine | $9/mo |
| Team | Unlimited | Unlimited machines | $29/mo |
## All Configuration Options
| Option | Default | Description |
|---|---|---|
| `mode` | `"server"` | Server (local DB) or client (remote server) |
| `serverUrl` | — | Remote server URL (client mode only) |
| `port` | `8787` | Local MCP server port (server mode) |
| `authToken` | auto-generated | Bearer token for write operations |
| `licenseKey` | — | License key (free tier works without one) |
| `llmProvider` | auto-detect | LLM provider override |
| `llmModel` | auto-detect | Model for scoring and memory operations |
| `reflectModel` | same as `llmModel` | Model for reflection/distillation (can be higher quality) |
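As a worked example of the options above, a client-mode config pointing at a remote server might look like this. This is a sketch: the `serverUrl` value is a placeholder, and the `authToken` must match the one configured on the server:

```json
{
  "mode": "client",
  "serverUrl": "http://192.168.1.50:8787",
  "authToken": "the-servers-configured-token"
}
```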
## Next Steps
- Usage — MCP tools, /learn command, nightly pipeline
- API Reference