MCP Server
The Rekall MCP server exposes 40+ tools through the Model Context Protocol, giving any MCP-compatible AI client full access to Rekall's memory capabilities.
What is MCP?
The Model Context Protocol (MCP) is an open standard that enables AI applications to securely connect to external tools and data sources. MCP provides a universal interface between AI clients (like Claude Desktop or Cursor) and capability servers (like Rekall).
Instead of building custom integrations for every AI tool, Rekall ships a single MCP server that works with any MCP-compatible client. The server runs locally on your machine and communicates over standard I/O, keeping your API keys and memory data secure.
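Under the hood, MCP messages are JSON-RPC 2.0 exchanged over the stdio transport. As an illustrative sketch (not Rekall-specific code), this is roughly the request an MCP client sends to discover a server's tools:

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP sends over stdio."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# After initialization, a client lists the server's tools with tools/list.
list_tools = jsonrpc_request(1, "tools/list")
print(json.dumps(list_tools))
```

The client writes one such message per line to the server's stdin and reads responses from its stdout; no network port is opened.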
Secure by Default
Runs locally, communicates over stdio. API keys never leave your machine.
Universal Protocol
One server works with Claude Desktop, Cursor, OpenCode, and any MCP client.
40+ Tools
Full coverage of memory CRUD, search, entities, workflows, execution, and more.
Installation
Install the Rekall MCP server globally via npm:
npm install -g @rekall/mcp-server
Verify the installation by checking the version:
rekall-mcp --version
Node.js required
The MCP server requires Node.js 18 or later. Install it from nodejs.org if you don't have it already.
Configuration
The MCP server is configured through your AI client's MCP configuration file. The most common setup is in claude_desktop_config.json:
{
  "mcpServers": {
    "rekall": {
      "command": "rekall-mcp",
      "args": ["--stdio"],
      "env": {
        "REKALL_API_KEY": "your-api-key-here"
      }
    }
  }
}
Environment Variables
| Variable | Required | Description |
|---|---|---|
| `REKALL_API_KEY` | Yes | Your Rekall API key for authentication |
| `REKALL_BASE_URL` | No | Custom API base URL (default: https://api.rekall.ai) |
| `REKALL_AGENT_ID` | No | Default agent identity for memory operations |
| `REKALL_CONTEXT` | No | Default memory context (e.g., project name) |
| `REKALL_LOG_LEVEL` | No | Logging level: debug, info, warn, error (default: info) |
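The defaults in the table can be sketched as an environment lookup with fallbacks. This is an illustrative model of the resolution order, not the server's actual implementation:

```python
import os

# Hypothetical sketch: resolve configuration from the environment
# variables above, applying the documented defaults.
def load_config(env=os.environ):
    api_key = env.get("REKALL_API_KEY")
    if not api_key:
        raise RuntimeError("REKALL_API_KEY is required")
    return {
        "api_key": api_key,
        "base_url": env.get("REKALL_BASE_URL", "https://api.rekall.ai"),
        "agent_id": env.get("REKALL_AGENT_ID"),
        "context": env.get("REKALL_CONTEXT"),
        "log_level": env.get("REKALL_LOG_LEVEL", "info"),
    }

cfg = load_config({"REKALL_API_KEY": "sk-test"})
print(cfg["base_url"])  # falls back to the default API endpoint
```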
Tools Overview
The Rekall MCP server provides 40+ tools organized into nine categories. Each tool is automatically available to your AI client once the MCP server is configured.
Memory CRUD
Create, read, update, and delete memories of all types.
- `create_memory`: Create a new memory with type, content, metadata, and context
- `get_memory`: Retrieve a memory by its ID with full metadata
- `update_memory`: Update an existing memory's content, metadata, or strength
- `delete_memory`: Soft-delete a memory (recoverable) or hard-delete permanently
- `list_memories`: List memories with filtering by type, context, date range, and tags
- `bulk_create_memories`: Create multiple memories in a single batch operation
- `archive_memory`: Archive a memory to reduce its visibility without deletion
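When the model invokes one of these tools, the client wraps the arguments in an MCP `tools/call` request. A hypothetical example for `create_memory` (argument values are illustrative):

```python
import json

# Illustrative JSON-RPC message an MCP client emits for a create_memory
# call; the argument names follow the tool's input schema.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_memory",
        "arguments": {
            "type": "episodic",
            "content": "User prefers dark mode in the dashboard.",
            "context": "project-alpha",
        },
    },
}
print(json.dumps(call, indent=2))
```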
Search
Semantic, full-text, and hybrid search across memories.
- `search_memories`: Hybrid search combining semantic similarity and keyword matching
- `semantic_search`: Pure vector-based semantic similarity search
- `fulltext_search`: PostgreSQL full-text search with ranking
- `search_by_entity`: Find all memories referencing a specific entity
- `search_by_context`: Search within a specific memory context (agent, user, project)
- `find_related`: Find memories related to a given memory by content similarity
Entities
Manage entities extracted from memories for knowledge graphs.
- `create_entity`: Create a new entity with name, type, and properties
- `get_entity`: Retrieve an entity by ID with its relationships
- `update_entity`: Update entity properties or merge with another entity
- `delete_entity`: Delete an entity and optionally its relationships
- `list_entities`: List entities filtered by type, context, or search query
- `merge_entities`: Merge duplicate entities, consolidating their relationships
Relationships
Define and query relationships between entities.
- `create_relationship`: Create a typed relationship between two entities
- `get_relationships`: Get all relationships for a given entity
- `delete_relationship`: Remove a relationship between entities
- `traverse_graph`: Walk the entity graph from a starting node with depth control
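The depth control on `traverse_graph` behaves like a bounded breadth-first walk. A minimal sketch of the semantics, assuming a plain adjacency-map representation (not Rekall's internal data structures):

```python
from collections import deque

# Illustrative depth-limited BFS: visit every entity reachable from
# `start` in at most max_depth relationship hops.
def traverse(graph, start, max_depth):
    seen, out = {start}, []
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        out.append((node, depth))
        if depth == max_depth:
            continue  # depth limit reached; do not expand further
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return out

edges = {"alice": ["acme"], "acme": ["bob"], "bob": []}
print(traverse(edges, "alice", 1))  # stops one hop out: acme, not bob
```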
Workflows
Manage procedural memories as structured workflows.
- `create_workflow`: Create a procedural workflow with ordered steps
- `get_workflow`: Retrieve a workflow with all steps and metadata
- `update_workflow`: Modify workflow steps, ordering, or conditions
- `execute_workflow`: Start a workflow execution, creating an execution memory
- `learn_workflow`: Let the agent learn a new workflow from observed actions
Agents
Agent-specific memory operations and context management.
- `get_agent_context`: Retrieve the full memory context for an agent session
- `set_agent_context`: Update the active memory context for an agent
- `switch_context`: Switch between memory contexts (user, project, hive)
- `get_agent_memories`: List all memories owned by a specific agent
Hives
Shared memory spaces for agent teams and collaborative knowledge.
- `create_hive`: Create a shared memory hive for a team of agents
- `join_hive`: Add an agent to an existing hive
- `leave_hive`: Remove an agent from a hive
- `share_to_hive`: Share a memory from an agent context to a hive
- `get_hive_memories`: List memories shared within a hive
Execution
Manage long-running agent task state with checkpoints.
- `create_execution`: Start tracking a new agent execution with initial state
- `checkpoint_execution`: Save a checkpoint of current execution state
- `resume_execution`: Resume an execution from its last checkpoint
- `complete_execution`: Mark an execution as complete with final results
- `get_execution_state`: Retrieve the current state of a running execution
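The checkpoint/resume lifecycle these tools describe can be sketched as serializing a snapshot of task state and restoring the most recent one. The `Execution` class and its fields below are illustrative stand-ins, not Rekall's API:

```python
import json

# Hypothetical model of the checkpoint/resume lifecycle.
class Execution:
    def __init__(self, task):
        self.task = task
        self.checkpoints = []

    def checkpoint(self, state):
        # checkpoint_execution: persist a serializable state snapshot
        self.checkpoints.append(json.dumps(state))

    def resume(self):
        # resume_execution: restore from the most recent snapshot
        if not self.checkpoints:
            return None
        return json.loads(self.checkpoints[-1])

run = Execution("migrate-db")
run.checkpoint({"step": 3, "tables_done": ["users", "orders"]})
print(run.resume())  # {'step': 3, 'tables_done': ['users', 'orders']}
```

Because each checkpoint is a full snapshot, a crashed or interrupted agent can pick up from the last saved step instead of restarting the task.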
Preferences
Learn and apply user preferences from interactions.
- `get_preferences`: Retrieve learned preferences for a user or agent
- `set_preference`: Explicitly set a preference key-value pair
- `infer_preferences`: Trigger preference inference from recent interactions
- `apply_preferences`: Apply learned preferences to a response or action
Progressive Disclosure (Infinite Context)
These tools enable token-efficient memory retrieval by splitting search into two steps: get a lightweight index first, then load only what you need.
- `memory_search_index`: Search memories and return a lightweight index with IDs, snippets, relevance scores, and token estimates. Uses 5-10x fewer tokens than full search.
- `memory_get_batch`: Load full content for specific memory IDs. Use after `memory_search_index` to load only the memories you actually need.
- `memory_observe`: Store an observation (tool output, file read, API response). Automatically indexed for progressive disclosure retrieval.
- `memory_session_context`: Get injectable session context with recent observations and active contexts. Recommended at session start.
- `memory_timeline`: Get chronological context around an observation. Useful for understanding temporal relationships.
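The two-step flow can be sketched as: filter the lightweight index by relevance, then fetch full bodies only for the surviving IDs. The data and helper functions here are illustrative stand-ins for the MCP tools:

```python
# Fake index and store standing in for memory_search_index / memory_get_batch.
INDEX = [
    {"id": "m1", "snippet": "deploy steps", "score": 0.92, "tokens": 480},
    {"id": "m2", "snippet": "lunch order", "score": 0.21, "tokens": 350},
    {"id": "m3", "snippet": "rollback plan", "score": 0.88, "tokens": 610},
]
FULL = {"m1": "Full deploy runbook", "m2": "Full lunch notes",
        "m3": "Full rollback plan"}

def search_index(min_score):
    # memory_search_index: returns IDs + snippets + scores, not full bodies
    return [e for e in INDEX if e["score"] >= min_score]

def get_batch(ids):
    # memory_get_batch: load full content only for the chosen IDs
    return {i: FULL[i] for i in ids}

hits = search_index(0.8)
print(get_batch([h["id"] for h in hits]))  # m1 and m3 only; m2 never loaded
```

Only the relevant memories' full content ever enters the model's context window; the rest cost just a snippet's worth of tokens.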
MCP Prompts
MCP clients can request built-in guidance prompts: `rekall-guide`, `infinite-context`, and `memory-workflow`. These provide the agent with best practices for memory-efficient retrieval at session start.
Tool Documentation Format
Each MCP tool follows a consistent schema. The AI client receives the tool definition automatically and can invoke it with the correct parameters. Here is an example of what a tool definition looks like:
{
  "name": "create_memory",
  "description": "Create a new memory with type, content, and metadata",
  "inputSchema": {
    "type": "object",
    "properties": {
      "type": {
        "type": "string",
        "enum": ["episodic", "semantic", "procedural", "long_term",
                 "short_term", "execution", "preference"],
        "description": "The type of memory to create"
      },
      "content": {
        "type": "string",
        "description": "The text content of the memory"
      },
      "metadata": {
        "type": "object",
        "description": "Optional metadata tags and properties"
      },
      "context": {
        "type": "string",
        "description": "Memory context (agent ID, project, or hive)"
      }
    },
    "required": ["type", "content"]
  }
}
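The `inputSchema` is standard JSON Schema, so arguments can be checked before the tool is invoked. A hand-rolled sketch of that check (a real client would use a full JSON Schema validator):

```python
# Minimal, illustrative validation against the create_memory schema above:
# required fields plus the enum on "type".
SCHEMA = {
    "required": ["type", "content"],
    "properties": {"type": {"enum": ["episodic", "semantic", "procedural",
                                     "long_term", "short_term",
                                     "execution", "preference"]}},
}

def validate(args, schema=SCHEMA):
    for field in schema["required"]:
        if field not in args:
            return f"missing required field: {field}"
    enum = schema["properties"]["type"]["enum"]
    if args["type"] not in enum:
        return f"invalid type: {args['type']}"
    return None  # valid

print(validate({"type": "episodic", "content": "hi"}))  # None -> valid
print(validate({"type": "magic", "content": "hi"}))     # rejected by enum
```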
Auto-discovery
MCP clients automatically discover all available tools when the server starts. You do not need to register tools manually; configure the server and your AI client will have access to all 40+ tools immediately.
