# Usage

How Hicortex works day-to-day.
## MCP Tools
Your agent has 6 memory tools available via MCP. These are used automatically — you rarely need to invoke them directly.
| Tool | Description |
|---|---|
| `hicortex_search` | Semantic search across all memories. Finds relevant context even when wording differs. |
| `hicortex_context` | Get contextual memories for a specific project or topic. Combines search with linked memories. |
| `hicortex_ingest` | Store a new memory. Used by the agent when it learns something worth remembering. |
| `hicortex_lessons` | Retrieve distilled lessons learned. High-value patterns extracted from experience. |
| `hicortex_update` | Update an existing memory with new information or corrections. |
| `hicortex_delete` | Remove a memory that is no longer relevant or was stored incorrectly. |
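Under the hood, each of these tools is invoked through MCP's JSON-RPC `tools/call` method. Here is a minimal sketch of the request an agent would send for `hicortex_search` (the `query` and `limit` argument names are illustrative assumptions, not the tool's documented schema):

```typescript
// Illustrative shape of an MCP "tools/call" request for hicortex_search.
// The argument names ("query", "limit") are assumptions, not the actual schema.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildSearchRequest(query: string, limit = 5): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: { name: "hicortex_search", arguments: { query, limit } },
  };
}
```

In practice your MCP client constructs and sends these requests for you; the sketch only shows the wire shape.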
## The /learn Command

Tell your agent to remember something explicitly:

```
/learn Always use UTC timestamps in this project
```
This triggers hicortex_ingest with your message. Useful for capturing preferences, decisions, or project-specific knowledge on the spot.
## Nightly Pipeline
The core of Hicortex. Every night (2 AM by default), the daemon runs an automated pipeline:
### 1. Auto-Capture
Scans Claude Code session transcripts (~/.claude/projects/) and OpenClaw logs for new sessions since the last run.
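At its core, the capture step is a scan for transcripts newer than the last run. A rough sketch, assuming transcripts are `.jsonl` files in a flat directory (an assumption; the real daemon's handling of the `~/.claude/projects/` layout may differ):

```typescript
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

// Illustrative: list transcript files modified since the last pipeline run.
// Assumes a flat directory of .jsonl transcripts; the real layout may be nested.
function newSessions(dir: string, lastRun: Date): string[] {
  return readdirSync(dir)
    .filter((name) => name.endsWith(".jsonl"))
    .map((name) => join(dir, name))
    .filter((path) => statSync(path).mtime > lastRun);
}
```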
### 2. Distill
Each session is sent through the LLM to extract memories — lessons learned, decisions made, mistakes to avoid, and context worth remembering. Lessons are extracted from both successes and failures.
### 3. Consolidate
The consolidation pipeline processes all memories:
- Score — importance scoring based on relevance and frequency
- Reflect — identify patterns across related memories
- Link — connect related memories in the knowledge graph
- Prune — decay and remove outdated or low-value memories
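As a mental model for the Score and Prune steps, think of each memory's importance decaying over time while frequent recall boosts it. The formula below is purely illustrative (the half-life, threshold, and field names are assumptions; Hicortex's actual scoring model is not documented here):

```typescript
// Purely illustrative scoring/pruning sketch; not Hicortex's actual model.
interface Memory {
  id: string;
  importance: number;  // initial importance assigned at distillation
  accessCount: number; // how often the memory has been recalled
  ageDays: number;     // days since the memory was last touched
}

const HALF_LIFE_DAYS = 30;   // assumed decay half-life
const PRUNE_THRESHOLD = 0.1; // assumed cutoff below which memories are dropped

function decayedScore(m: Memory): number {
  const decay = Math.pow(0.5, m.ageDays / HALF_LIFE_DAYS);
  const frequencyBoost = 1 + Math.log1p(m.accessCount);
  return m.importance * decay * frequencyBoost;
}

function prune(memories: Memory[]): Memory[] {
  return memories.filter((m) => decayedScore(m) >= PRUNE_THRESHOLD);
}
```

The net effect: frequently recalled memories survive longer, while stale, rarely used ones fall below the threshold and are removed.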
### 4. Inject
Distilled lessons are automatically injected into your agent's context (CLAUDE.md for Claude Code, skill injection for OpenClaw). Your agent starts the next session with accumulated wisdom.
Run `npx @gamaze/hicortex nightly` to execute the pipeline immediately without waiting for 2 AM.
## Multi-Machine Workflow
With server + client mode, multiple machines share a single memory:

- Server machine: `npx @gamaze/hicortex init` — runs the database and MCP server
- Client machines: `npx @gamaze/hicortex init --server https://server:8787`
- Each client runs its own nightly pipeline locally (using its local LLM)
- Distilled memories are POSTed to the central server via `POST /ingest`
- All machines query the same shared memory for retrieval
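The client-to-server handoff is a plain HTTP POST. A hedged sketch of what it could look like (the `/ingest` path comes from the list above; the body fields `content`, `source`, and `tags` are assumptions about the payload, not the documented API):

```typescript
// Sketch of a client pushing one distilled memory to the central server.
// The body fields are assumed; consult the API Reference for the real schema.
function buildIngestBody(content: string, tags: string[]) {
  return { content, source: "nightly-pipeline", tags };
}

async function ingest(serverUrl: string, content: string, tags: string[]): Promise<Response> {
  return fetch(`${serverUrl}/ingest`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildIngestBody(content, tags)),
  });
}
```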
## What You Will Notice
### Fewer Repetitive Questions
Your agent remembers your preferences, coding style, and context from previous sessions.
### Fewer Repeated Mistakes

When your agent makes a mistake and you correct it, Hicortex captures the lesson, making the same mistake far less likely to recur.
### Better Context From Day One
Instead of re-explaining your project or preferences every session, your agent already knows.
### Compound Learning
The longer you use Hicortex, the smarter your agent becomes. Not just accumulated data — actively refined and scored knowledge.
## Monitoring
Check your server's memory state:
```shell
curl http://localhost:8787/health
# or
npx @gamaze/hicortex status
```
Shows memory count, links, DB size, LLM configuration, and daemon status.
## Next Steps
- API Reference
- Configuration — Customize LLM, auth, and more
- Upgrading — Unlock unlimited memories