Best Practices
Practical guidelines for getting the most out of Rekall: memory hygiene, choosing the right memory type, performance optimization, security, data organization, naming conventions, and scaling patterns.
Overview
Rekall is most effective when memories are well-organized, properly typed, and maintained over time. These best practices are drawn from production deployments across teams of all sizes.
Memory Hygiene
Writing Quality Memories
The quality of your memories directly affects search quality. Follow these guidelines for content that is easy to find and useful when recalled.
```typescript
// BAD: Vague, no context
await rekall.memories.create({
  type: 'episodic',
  content: 'Fixed the bug',
});

// GOOD: Specific, searchable, contextual
await rekall.memories.create({
  type: 'episodic',
  content: 'Fixed race condition in payments-service checkout flow. The issue was concurrent balance checks without a mutex lock. Applied optimistic locking with a version column on the accounts table.',
  metadata: {
    tags: ['bug-fix', 'payments', 'concurrency', 'database'],
    source: 'debugging-session',
    project: 'payments-service',
    file: 'src/checkout/processor.ts',
  },
  importance: 0.85,
});

// BAD: Too much detail (entire file contents)
await rekall.memories.create({
  type: 'episodic',
  content: entireFileContents, // 500+ lines
});

// GOOD: Summary with key details
await rekall.memories.create({
  type: 'episodic',
  content: 'Refactored src/checkout/processor.ts to use optimistic locking pattern. Key changes: added version column to accounts table, wrapped balance check + deduction in a retry loop with version check, added 3 unit tests for concurrent scenarios.',
  metadata: {
    tags: ['refactoring', 'payments', 'concurrency'],
    filesModified: ['src/checkout/processor.ts', 'src/models/account.ts'],
    testsAdded: 3,
  },
});
```
Content length sweet spot
Aim for 1-3 sentences per memory. Long enough to be searchable and provide context, short enough to be quickly scanned. If you need more detail, create multiple related memories or add structured metadata.
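If you want to enforce the sweet spot programmatically, a rough sentence-count check can gate writes before they reach Rekall. This is a minimal sketch: the helper names and the punctuation heuristic are illustrative, not part of the Rekall SDK.

```typescript
// Rough heuristic: count sentence-ending punctuation to estimate length.
// A "good" memory is 1-3 sentences; anything longer should be summarized
// or split into multiple related memories.
function sentenceCount(content: string): number {
  const matches = content.match(/[.!?]+(\s|$)/g);
  return matches ? matches.length : (content.trim() ? 1 : 0);
}

function isGoodMemoryLength(content: string): boolean {
  const n = sentenceCount(content);
  return n >= 1 && n <= 3;
}
```

A check like this works well as a lint step in whatever pipeline feeds memories into `rekall.memories.create`.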
Tagging Strategy
Consistent tags make filtering powerful. Establish a tagging convention for your team.
```typescript
// Use hierarchical tags for easy filtering
const tagConventions = {
  // Activity type
  activity: ['bug-fix', 'feature', 'refactoring', 'research', 'debugging', 'review', 'deployment'],
  // Project/service name
  project: ['payments-service', 'auth-service', 'web-app', 'api-gateway'],
  // Technology
  tech: ['typescript', 'postgresql', 'redis', 'docker', 'kubernetes'],
  // Priority/importance
  priority: ['critical', 'high', 'medium', 'low'],
  // Source
  source: ['conversation', 'debugging-session', 'code-review', 'research', 'meeting'],
};

// Apply consistently
await rekall.memories.create({
  type: 'episodic',
  content: 'Discovered Redis memory leak in auth-service session store',
  metadata: {
    tags: ['bug-fix', 'auth-service', 'redis', 'critical', 'debugging-session'],
  },
});
```
Regular Cleanup
Periodically review and clean up outdated or low-value memories.
```typescript
// Find low-importance, old memories
const stale = await rekall.memories.search({
  query: '*',
  type: 'episodic',
  filters: {
    createdBefore: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000).toISOString(),
    maxImportance: 0.3,
  },
  limit: 100,
});

console.log(`Found ${stale.memories.length} candidates for cleanup`);

// Review before deleting
for (const memory of stale.memories) {
  console.log(`[${memory.importance}] ${memory.content.substring(0, 80)}...`);
}

// Batch delete after review
await rekall.memories.deleteMany({
  ids: stale.memories.map(m => m.id),
});

// Or let Rekall handle it automatically with decay settings
await rekall.config.update({
  decay: {
    enabled: true,
    minImportanceToRetain: 0.2,
    decayRate: 0.01, // Per day
    consolidationThreshold: 30, // Days before consolidation
  },
});
```
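To build intuition for what a per-day decay rate does to retention, here is the arithmetic under a simple exponential model. This is an illustration only: the exact decay curve Rekall applies internally may differ.

```typescript
// Illustrative only: per-day exponential decay.
// importance_after_n_days = importance * (1 - decayRate)^days
function decayedImportance(importance: number, decayRate: number, days: number): number {
  return importance * Math.pow(1 - decayRate, days);
}

// With decayRate 0.01, a memory that starts at importance 0.5 drops
// below a 0.2 retention floor after roughly three months.
```

Under this model, `decayedImportance(0.5, 0.01, 92)` is about 0.198, so a mid-importance memory survives about 90 days before crossing the `minImportanceToRetain: 0.2` floor shown above.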
Choosing the Right Memory Type
Decision Guide
| You want to record... | Use | Example |
|---|---|---|
| What happened | Episodic | Debugging session, conversation, decision |
| What something is | Semantic | Project structure, team org, API relationships |
| How to do something | Procedural | Deploy process, review checklist, onboarding |
| Consolidated knowledge | Long-Term | Auto-consolidated from high-importance episodic |
| Current session context | Short-Term | Current task, open files, active conversation |
| Task progress/state | Execution | Migration progress, deployment state, checkpoints |
| User preferences | Preferences | Coding style, communication tone, tool choices |
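The decision table above can be folded into a small helper if you want the choice made in code. This is a sketch only: the `intent` labels are invented for illustration, while the memory type names mirror the ones used throughout this page.

```typescript
type MemoryType =
  | 'episodic' | 'semantic' | 'procedural' | 'long-term'
  | 'short-term' | 'execution' | 'preferences';

// Maps a recording intent (left column of the table) to a memory type.
function chooseMemoryType(intent:
  | 'what-happened' | 'what-something-is' | 'how-to'
  | 'consolidated' | 'session-context' | 'task-state' | 'user-preference'
): MemoryType {
  switch (intent) {
    case 'what-happened':     return 'episodic';
    case 'what-something-is': return 'semantic';
    case 'how-to':            return 'procedural';
    case 'consolidated':      return 'long-term';
    case 'session-context':   return 'short-term';
    case 'task-state':        return 'execution';
    case 'user-preference':   return 'preferences';
  }
}
```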
When in doubt, use Episodic
Episodic memory is the most flexible type. If you are not sure which type to use, start with episodic. Rekall can automatically extract entities (semantic), detect patterns (procedural), and consolidate important events (long-term) from episodic data.
Performance Tips
Search Optimization
```typescript
// BAD: Broad query with no filters
const results = await rekall.memories.search({
  query: 'things',
  limit: 1000,
});

// GOOD: Specific query with filters to narrow results
const authResults = await rekall.memories.search({
  query: 'authentication module JWT implementation',
  type: 'episodic',
  filters: {
    tags: ['auth'],
    createdAfter: '2025-01-01T00:00:00Z',
    minImportance: 0.5,
  },
  limit: 10,
});

// GOOD: Use metadata filters for precise lookups
const deployResults = await rekall.memories.search({
  query: 'deployment',
  filters: {
    metadata: {
      project: 'payments-service',
      environment: 'production',
    },
  },
  limit: 5,
});

// TIP: Limit results to what you actually need
// Fetching 10 results is much faster than fetching 1000
```
Batch Operations
```typescript
// BAD: Creating memories one at a time in a loop
for (const event of events) {
  await rekall.memories.create({ type: 'episodic', content: event.description });
}

// GOOD: Use batch create for multiple memories
await rekall.memories.createMany(
  events.map(event => ({
    type: 'episodic' as const,
    content: event.description,
    metadata: { tags: event.tags },
    importance: event.importance,
  })),
);

// GOOD: Use batch search for multiple queries
const results = await rekall.memories.searchMany([
  { query: 'authentication issues', type: 'episodic', limit: 5 },
  { query: 'deployment history', type: 'episodic', limit: 5 },
  { query: 'team decisions', type: 'episodic', limit: 5 },
]);
```
Security Considerations
API Key Management
```typescript
// NEVER: Hardcode API keys
const rekall = new Rekall({ apiKey: 'rk_live_abc123' }); // DON'T DO THIS

// GOOD: Use environment variables
const rekall = new Rekall({ apiKey: process.env.REKALL_API_KEY! });

// GOOD: Use scoped keys for different purposes
// (load each from its own environment variable or secret manager entry)
// Full-access key (keep secure, use for admin operations)
const adminKey = process.env.REKALL_ADMIN_KEY!;
// Read-only key (safe to use in client-facing code)
const readOnlyKey = process.env.REKALL_READONLY_KEY!;
// Agent-specific key (limited to agent operations)
const agentKey = process.env.REKALL_AGENT_KEY!;

// GOOD: Rotate keys regularly
// Use the Rekall dashboard or API to create new keys
// and deprecate old ones with a grace period
```
Never expose live keys
Never commit API keys to version control, embed them in client-side code, or share them in chat. Use environment variables, secret managers, or the sandbox mode for testing.
Handling Sensitive Data
```typescript
// NEVER: Store credentials, tokens, or PII in memory content
await rekall.memories.create({
  type: 'episodic',
  content: 'Database password is hunter2', // NEVER DO THIS
});

// GOOD: Store references, not values
await rekall.memories.create({
  type: 'episodic',
  content: 'Updated database credentials stored in AWS Secrets Manager under key "prod/db/primary"',
  metadata: {
    tags: ['credentials', 'infrastructure'],
    secretRef: 'aws:secretsmanager:prod/db/primary',
  },
});

// GOOD: Use redaction for logs that might contain PII
await rekall.memories.create({
  type: 'episodic',
  content: 'User contacted support about billing issue for account [REDACTED]',
  metadata: {
    tags: ['support', 'billing'],
    // Store PII references separately if needed
    accountRef: 'encrypted:account_id_hash',
  },
});

// GOOD: Configure auto-redaction patterns
await rekall.config.update({
  redaction: {
    enabled: true,
    patterns: [
      { type: 'email', action: 'hash' },
      { type: 'credit-card', action: 'redact' },
      { type: 'api-key', action: 'redact' },
      { type: 'ssn', action: 'redact' },
    ],
  },
});
```
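As defense in depth, you can also run a redaction pass on your side before content ever reaches the API. The sketch below is illustrative: the patterns are deliberately simplified examples, not the patterns Rekall's auto-redaction uses internally.

```typescript
// Illustrative client-side redaction pass to run before creating memories.
// Patterns are simplified; production matchers need more care (and tests).
const REDACTION_PATTERNS: Array<{ name: string; pattern: RegExp }> = [
  { name: 'email',   pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { name: 'api-key', pattern: /rk_(live|test)_[A-Za-z0-9]+/g },
];

function redact(content: string): string {
  return REDACTION_PATTERNS.reduce(
    (text, { pattern }) => text.replace(pattern, '[REDACTED]'),
    content,
  );
}
```

Running both a client-side pass and server-side auto-redaction means a leaked secret has to slip past two filters instead of one.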
Data Organization
Structure your memory data for maximum findability and usefulness.
```typescript
// 1. Use consistent metadata schemas per memory type
interface EpisodicMetadata {
  source: 'conversation' | 'debugging-session' | 'code-review' | 'research' | 'meeting';
  project?: string;
  tags: string[];
  participants?: string[];
}

interface SemanticEntityProps {
  type: 'person' | 'project' | 'service' | 'technology' | 'concept' | 'decision';
  team?: string;
  status?: 'active' | 'deprecated' | 'planned';
}

// 2. Group related memories with session IDs
const sessionId = `session_${Date.now()}`;

await rekall.memories.create({
  type: 'episodic',
  content: 'Started debugging the auth timeout issue',
  context: { sessionId },
});

await rekall.memories.create({
  type: 'episodic',
  content: 'Found the root cause: Redis connection timeout set too low',
  context: { sessionId },
});

// 3. Use importance scores meaningfully
// 0.9-1.0: Critical decisions, production incidents, security issues
// 0.7-0.9: Key learnings, significant bug fixes, architectural changes
// 0.4-0.7: Normal work activities, routine decisions
// 0.0-0.4: Trivial notes, temporary context (auto-decays)

// 4. Link memories to entities when possible
const memory = await rekall.memories.create({
  type: 'episodic',
  content: 'Migrated auth-service from session cookies to JWT tokens',
});

// Link to the relevant entity
await rekall.semantic.createRelationship({
  sourceId: memory.id,
  targetId: 'ent_auth_service',
  type: 'relates-to',
});
```
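If your team standardizes on importance bands, a tiny helper keeps scoring consistent across writers. The band boundaries below are the ones suggested in this guide; they are a convention, not something Rekall enforces.

```typescript
// Maps a numeric importance score to the bands described above.
// Boundaries follow this guide's suggested convention.
function importanceBand(score: number): 'critical' | 'high' | 'medium' | 'low' {
  if (score >= 0.9) return 'critical';
  if (score >= 0.7) return 'high';
  if (score >= 0.4) return 'medium';
  return 'low';
}
```

Tagging each memory with its band (alongside the raw score) makes the low-importance cleanup queries shown earlier much easier to write.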
Naming Conventions
| Resource | Convention | Examples |
|---|---|---|
| Entity names | Human-readable, title case | Payments Service, Alice Chen |
| Entity types | Lowercase, singular | person, service, technology |
| Relationship types | Lowercase kebab-case, verb form | owns, depends-on, created-by |
| Tags | Lowercase kebab-case | bug-fix, code-review, auth |
| Workflow names | Title case, descriptive | Deploy to Production, PR Review |
| Hive names | Title case, team/purpose | Platform Team, Project Alpha |
| Agent husks | PascalCase, role-based | CodeReviewer, DeployBot |
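Free-form labels drift from these conventions quickly, so it can help to normalize tags in code before writing them. The helper below is an illustration, not part of the Rekall SDK.

```typescript
// Normalizes free-form labels to the lowercase kebab-case tag convention.
function toTag(label: string): string {
  return label
    .trim()
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2') // split camelCase/PascalCase
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')            // non-alphanumerics -> hyphen
    .replace(/^-+|-+$/g, '');               // trim stray hyphens
}
```

For example, `toTag('Bug Fix')` and `toTag('BugFix')` both normalize to `bug-fix`, so searches filtered on tags do not silently miss mislabeled memories.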
Scaling Patterns
High-Volume Memory Creation
```typescript
// 1. Use batch operations for bulk ingestion
const BATCH_SIZE = 100;
for (let i = 0; i < largeDataset.length; i += BATCH_SIZE) {
  const batch = largeDataset.slice(i, i + BATCH_SIZE);
  await rekall.memories.createMany(batch);
}

// 2. Use async/concurrent operations for independent tasks
const results = await Promise.allSettled([
  rekall.memories.search({ query: 'auth', limit: 5 }),
  rekall.memories.search({ query: 'payments', limit: 5 }),
  rekall.semantic.searchEntities({ query: 'services', limit: 10 }),
]);

// 3. Implement rate limit handling
import { RekallClient } from '@rekall/client';

const client = new RekallClient({
  apiKey: process.env.REKALL_API_KEY!,
  rateLimiting: {
    enabled: true,
    strategy: 'adaptive', // Automatically adjusts to rate limits
    maxRetries: 3,
    backoffMultiplier: 2,
  },
});

// 4. Use webhooks instead of polling
await rekall.webhooks.create({
  url: 'https://your-app.com/webhooks/rekall',
  events: ['memory.created', 'workflow.completed', 'preference.detected'],
  secret: process.env.REKALL_WEBHOOK_SECRET!, // never hardcode webhook secrets
});
```
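If you are not using the client's built-in adaptive rate limiting, the same backoff behavior can be sketched generically. This helper is a pattern illustration, not part of the Rekall SDK, and a production version should retry only on rate-limit errors rather than on every failure.

```typescript
// Generic retry with exponential backoff for rate-limited calls.
async function withBackoff<T>(
  fn: () => Promise<T>,
  { maxRetries = 3, baseDelayMs = 500, multiplier = 2 } = {},
): Promise<T> {
  let delay = baseDelayMs;
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // out of retries: surface the error
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= multiplier; // e.g. 500ms, 1s, 2s, ...
    }
  }
}
```

With the defaults shown, a call is attempted up to four times (one initial try plus three retries) with delays of 500ms, 1s, and 2s between attempts.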
Large Teams
```typescript
// 1. Use multiple hives for different teams/projects
const platformHive = await rekall.hives.create({ name: 'Platform Team' });
const productHive = await rekall.hives.create({ name: 'Product Team' });
const infraHive = await rekall.hives.create({ name: 'Infrastructure' });

// 2. Set up shared conventions via hive preferences
await rekall.preferences.setMany([
  {
    category: 'conventions',
    key: 'tagging-schema',
    value: 'activity:project:tech:priority',
    description: 'Tag format: activity type, project name, technology, priority',
  },
  {
    category: 'conventions',
    key: 'importance-scale',
    value: 'critical:0.9+, high:0.7-0.9, medium:0.4-0.7, low:0.0-0.4',
    description: 'Standardized importance score ranges',
  },
], { context: { hiveId: platformHive.id } });

// 3. Use agent hierarchies for complex workflows
// Main orchestrator agent
const orchestrator = await rekall.agents.createHusk({
  name: 'TeamOrchestrator',
  config: { maxConcurrentTasks: 20 },
});

// Specialized sub-agents
const reviewAgent = await rekall.agents.createHusk({
  name: 'CodeReviewer',
  config: { memoryAccess: { hive: 'read' } },
});

const deployAgent = await rekall.agents.createHusk({
  name: 'DeployBot',
  config: { memoryAccess: { hive: 'read-write' } },
});

// 4. Monitor memory growth
const stats = await rekall.admin.getStats();
console.log(`Total memories: ${stats.totalMemories}`);
console.log(`Storage used: ${stats.storageUsedMB}MB`);
console.log(`API calls (24h): ${stats.apiCalls24h}`);
console.log(`Active agents: ${stats.activeAgents}`);
```
Start simple, scale gradually
You do not need all of these patterns from day one. Start with personal memory and episodic events. Add semantic entities when you need structured knowledge. Introduce hives when you have a team. Add agents and workflows as automation needs grow.
