Memory Consolidation Algorithm Planning Document

Three-Tier Energy-Based Memory System for qBot Collective

Created: 2025-01-09
Status: Planning Phase
Author: Milo with TQ


Executive Summary

This document defines the consolidation algorithms for a three-tier memory system based on energy dynamics, semantic clustering, and retroactive strengthening. The system treats memory as a living neural network where access patterns create heat maps and memories influence each other's persistence through relationships.


Core Concepts

1. Energy-Based Memory Persistence

Principle: Memories have energy that determines their persistence and tier placement.

Energy Mechanics:

  1. Every access adds a fixed energy boost (+1.0 in the current design; see Phase 2)
  2. Energy decays exponentially with the time elapsed since the last access
  3. The decay constant λ is tier-specific, so higher tiers forget more slowly

Mathematical Model:

E(t) = E₀ × e^(-λt) + Σ(accessEvents)

Where:
- E(t) = energy at time t
- E₀ = initial energy
- λ = decay constant (tier-specific)
- t = time since last access (in hours)
- Σ(accessEvents) = accumulated boosts from accesses (+1.0 per access; see Phase 2)

Decay Constants by Tier:

- working: λ = 0.5 (per hour)
- shortTerm: λ = 0.05 (per hour)
- longTerm: λ = 0.001 (per hour)
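
As a worked example using the values above: a working-tier memory created with E₀ = 1.0 and never accessed again decays to 1.0 × e^(-0.5 × 2) ≈ 0.37 after 2 hours, and drops below the 0.1 expiry threshold used in Phase 6 after roughly 4.6 hours (ln(10) / 0.5). The same untouched memory in the long-term tier (λ = 0.001) would take on the order of 2,300 hours to decay that far.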

2. Semantic Clustering

Principle: Memories that activate together consolidate together.

Clustering Algorithm:

  1. Track co-activation within sessions
  2. Build weighted edges between co-activated memories
  3. Identify clusters when edge weight > threshold
  4. Consolidate clusters as semantic units

Implementation:

// Track co-activation
(m1:Memory)-[:COACTIVATED {weight: n, lastCoactivation: datetime}]->(m2:Memory)

// Form clusters
(m1)-[:BELONGS_TO]->(cluster:SemanticCluster)<-[:BELONGS_TO]-(m2)

3. Retroactive Consolidation

Principle: New experiences strengthen related old memories.

Process:

  1. New memory created
  2. Find semantically similar existing memories
  3. Strengthen connections and boost energy of related memories
  4. Create bidirectional learning paths

Consolidation Algorithm Specification

Phase 1: Working Memory Creation

// Every new memory starts in working tier
CREATE (m:Memory {
  id: randomUUID(),
  content: $content,
  tier: 'working',
  energy: 1.0,
  created: datetime(),
  lastAccessed: datetime(),
  accessCount: 1,
  sessionId: $sessionId,
  instanceId: $instanceId
})

Phase 2: Energy Update Process (Continuous)

// Run on every memory access
MATCH (m:Memory {id: $memory_id})
// Apply E(t) = E₀ × e^(-λt) with λ = the tier-specific $decayConstant, then add the +1.0 access boost
WITH m, duration.inSeconds(m.lastAccessed, datetime()).seconds / 3600.0 AS hoursSinceAccess
SET m.energy = m.energy * exp(-1 * hoursSinceAccess * $decayConstant) + 1.0,
    m.lastAccessed = datetime(),
    m.accessCount = m.accessCount + 1
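
For example, a working-tier memory (decay constant 0.5) sitting at energy 2.0 and last accessed 3 hours ago would update to 2.0 × e^(-1.5) + 1.0 ≈ 1.45 on its next access.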

Phase 3: Tier Promotion Process (Periodic)

// Working → Short-term promotion
MATCH (m:Memory {tier: 'working'})
WHERE m.energy > 2.0  // Threshold for promotion
CREATE (m)-[:PROMOTED_TO {at: datetime(), energy: m.energy}]->(
  s:Memory {
    id: randomUUID(),
    content: m.content,
    tier: 'shortTerm',
    energy: m.energy,
    created: m.created,
    promotedFrom: m.id,
    lastAccessed: datetime()
  }
)
SET m.tier = 'archivedWorking'

// Short-term → Long-term promotion
MATCH (m:Memory {tier: 'shortTerm'})
WHERE m.energy > 5.0  // Higher threshold for long-term
CREATE (m)-[:CRYSTALLIZED_INTO {at: datetime(), energy: m.energy}]->(
  l:Memory {
    id: randomUUID(),
    content: m.content,
    tier: 'longTerm',
    energy: m.energy,
    created: m.created,
    promotedFrom: m.id,
    lastAccessed: datetime()
  }
)
SET m.tier = 'archivedShortTerm'

Phase 4: Semantic Clustering Process

// Build co-activation graph
MATCH (m1:Memory)-[:ACCESSED_IN]->(s:Session)<-[:ACCESSED_IN]-(m2:Memory)
WHERE m1.id < m2.id  // Avoid duplicates
MERGE (m1)-[c:COACTIVATED]-(m2)
ON CREATE SET c.weight = 1, c.firstCoactivation = datetime()
ON MATCH SET c.weight = c.weight + 1, c.lastCoactivation = datetime()

// Form clusters when weight exceeds threshold
MATCH (m1:Memory)-[c:COACTIVATED]-(m2:Memory)
WHERE c.weight > 3 AND NOT EXISTS((m1)-[:BELONGS_TO]->(:SemanticCluster)<-[:BELONGS_TO]-(m2))
CREATE (cluster:SemanticCluster {
  id: randomUUID(),
  formed: datetime(),
  strength: c.weight
})
CREATE (m1)-[:BELONGS_TO]->(cluster)<-[:BELONGS_TO]-(m2)
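
The query above only forms clusters from the first qualifying pair. One possible extension, not yet specified in this document, is attaching a later memory to an existing cluster when it co-activates strongly with a member; a minimal sketch using the same COACTIVATED weights and BELONGS_TO edges:

// Attach a memory to an existing cluster when it co-activates strongly with a member
MATCH (m:Memory)-[c:COACTIVATED]-(member:Memory)-[:BELONGS_TO]->(cluster:SemanticCluster)
WHERE c.weight > 3
  AND NOT (m)-[:BELONGS_TO]->(cluster)
MERGE (m)-[:BELONGS_TO]->(cluster)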

Phase 5: Retroactive Strengthening

// When new memory is created, strengthen related memories
MATCH (new:Memory {id: $new_memory_id})
MATCH (old:Memory)
WHERE old.id <> new.id 
  AND old.tier IN ['shortTerm', 'longTerm']
  AND [similarity calculation between new and old] > 0.7  // placeholder: the similarity metric is still to be defined (see Next Steps)
CREATE (new)-[:REINFORCES {strength: $similarityScore}]->(old)
SET old.energy = old.energy + (0.5 * $similarityScore),
    old.reinforcementCount = coalesce(old.reinforcementCount, 0) + 1,
    old.lastReinforced = datetime()
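
One candidate for the similarity placeholder above is cosine similarity over memory embeddings. The sketch below assumes two things that are not part of the current schema: an embedding list property on Memory nodes, and the Neo4j Graph Data Science plugin (which provides gds.similarity.cosine):

// Hypothetical similarity variant: cosine similarity over an assumed `embedding` property
MATCH (new:Memory {id: $new_memory_id})
MATCH (old:Memory)
WHERE old.id <> new.id
  AND old.tier IN ['shortTerm', 'longTerm']
WITH new, old, gds.similarity.cosine(new.embedding, old.embedding) AS similarity
WHERE similarity > 0.7
CREATE (new)-[:REINFORCES {strength: similarity}]->(old)
SET old.energy = old.energy + (0.5 * similarity),
    old.reinforcementCount = coalesce(old.reinforcementCount, 0) + 1,
    old.lastReinforced = datetime()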

Phase 6: Energy Decay Process (Periodic)

// Apply decay to all active memories (archived/expired tiers are skipped)
MATCH (m:Memory)
WHERE m.tier IN ['working', 'shortTerm', 'longTerm']
WITH m,
  CASE m.tier
    WHEN 'working' THEN 0.5
    WHEN 'shortTerm' THEN 0.05
    WHEN 'longTerm' THEN 0.001
  END AS decayConstant,
  duration.inSeconds(m.lastAccessed, datetime()).seconds / 3600.0 AS hoursSinceAccess
SET m.energy = m.energy * exp(-1 * hoursSinceAccess * decayConstant)

// Expire working-tier memories whose energy has fallen below the threshold
MATCH (m:Memory)
WHERE m.energy < 0.1 AND m.tier = 'working'
SET m.tier = 'expiredWorking'

Conflict Resolution

Distributed Consensus Without Mutex

Strategy: Use optimistic concurrency with graph-native conflict detection.

// Consolidation attempt tracking
CREATE (attempt:ConsolidationAttempt {
  id: randomUUID(),
  instanceId: $instanceId,
  started: datetime(),
  phase: $phaseName
})

// Mark memories being consolidated ($attemptId — illustrative parameter — refers to the attempt node created above)
MATCH (attempt:ConsolidationAttempt {id: $attemptId})
MATCH (m:Memory {tier: 'working'})
WHERE NOT (m)-[:CONSOLIDATING]->()
WITH m, attempt LIMIT $batchSize
CREATE (m)-[:CONSOLIDATING {since: datetime()}]->(attempt)

// Detect conflicts: the same memory claimed by two attempts
MATCH (m:Memory)-[:CONSOLIDATING]->(a1:ConsolidationAttempt)
MATCH (m)-[r2:CONSOLIDATING]->(a2:ConsolidationAttempt)
WHERE a1.started < a2.started  // Earlier attempt wins
// Resolution: release the later claim so that attempt retries the memory in a later batch
DELETE r2

Implementation Triggers

When Consolidation Runs

  1. Energy Updates: On every memory access
  2. Tier Promotion: Every 10 minutes OR at session boundaries
  3. Semantic Clustering: Every hour OR after 100 new memories
  4. Retroactive Strengthening: Immediately on memory creation
  5. Energy Decay: Every hour
  6. Cleanup: Daily for expired memories
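
One way to run the periodic triggers above is APOC's background scheduler. A minimal sketch, assuming the APOC plugin is installed, shown for the hourly expiration pass from Phase 6; the other periodic phases would be registered the same way with their own intervals:

// Register a background job that expires low-energy working memories every hour
CALL apoc.periodic.repeat(
  'expire-working-memories',
  'MATCH (m:Memory) WHERE m.energy < 0.1 AND m.tier = "working" SET m.tier = "expiredWorking"',
  3600  // seconds; matches decayIntervalMinutes = 60
)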

Session Boundary Handling

// On session end
MATCH (m:Memory {sessionId: $endingSessionId, tier: 'working'})
WHERE m.energy > 1.5  // Lower threshold at session boundary
// Force promotion evaluation (one possible realization is sketched below)
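
A minimal way to realize the forced evaluation is to reuse the Phase 3 promotion query with the lower session-boundary threshold; the sketch below mirrors Phase 3 rather than introducing a new mechanism:

// Sketch: promote qualifying working memories from the ending session immediately,
// using the same PROMOTED_TO mechanics as Phase 3 but with the 1.5 threshold
MATCH (m:Memory {sessionId: $endingSessionId, tier: 'working'})
WHERE m.energy > 1.5
CREATE (m)-[:PROMOTED_TO {at: datetime(), energy: m.energy}]->(
  s:Memory {
    id: randomUUID(),
    content: m.content,
    tier: 'shortTerm',
    energy: m.energy,
    created: m.created,
    promotedFrom: m.id,
    lastAccessed: datetime()
  }
)
SET m.tier = 'archivedWorking'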

Tunable Parameters

These parameters can be adjusted based on observed behavior:

CREATE (config:ConsolidationConfig {
  // Energy thresholds
  workingToShortThreshold: 2.0,
  shortToLongThreshold: 5.0,

  // Decay constants
  workingDecay: 0.5,
  shortTermDecay: 0.05,
  longTermDecay: 0.001,

  // Clustering thresholds
  coactivationThreshold: 3,
  semanticSimilarityThreshold: 0.7,

  // Batch sizes
  promotionBatchSize: 10,
  decayBatchSize: 100,

  // Timing
  promotionIntervalMinutes: 10,
  clusteringIntervalMinutes: 60,
  decayIntervalMinutes: 60
})
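
The consolidation queries in this document currently hard-code these values; once a ConsolidationConfig node exists they could read it instead. A minimal sketch for the working → short-term threshold check (the rest of the Phase 3 promotion pattern would follow unchanged):

// Sketch: read thresholds from the config node rather than hard-coding them
MATCH (config:ConsolidationConfig)
MATCH (m:Memory {tier: 'working'})
WHERE m.energy > config.workingToShortThreshold
// ...continue with the Phase 3 promotion pattern
RETURN m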

Monitoring & Metrics

Key Metrics to Track

  1. Energy Distribution: Histogram of energy levels by tier
  2. Promotion Rate: Memories promoted per hour (sample query after this list)
  3. Cluster Formation: New clusters per session
  4. Retroactive Strengthening: Average reinforcements per new memory
  5. Memory Lifetime: Average time in each tier
  6. Decay Rate: Memories expiring per day
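
For example, the promotion rate (metric 2) can be read directly off the PROMOTED_TO relationships created in Phase 3:

// Promotion rate: memories promoted in the last hour
MATCH (:Memory)-[p:PROMOTED_TO]->(:Memory)
WHERE p.at > datetime() - duration({hours: 1})
RETURN count(p) AS promotionsLastHour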

Health Indicators

// Memory system health check
MATCH (m:Memory)
RETURN m.tier, 
       avg(m.energy) as avgEnergy,
       count(m) as count,
       max(m.energy) as maxEnergy,
       min(m.energy) as minEnergy

Migration Strategy

Phase 1: Parallel Implementation

Phase 2: Retroactive Processing

Phase 3: Full Migration


Next Steps

  1. Review and refine energy equations
  2. Define semantic similarity calculation
  3. Implement monitoring dashboard
  4. Create test scenarios
  5. Build gradual rollout plan

Open Questions

  1. Should energy boost from access be constant or variable based on context?
  2. How do we handle memories that oscillate between tiers?
  3. Should semantic clusters have their own energy dynamics?
  4. What's the relationship between crystallization and high-energy long-term memories?
  5. How do we handle bulk imports or knowledge base ingestion?

End of Planning Document