Memory Intelligence
Six additive proxy modules that layer intelligence on top of the core memory engine — without touching obfuscated core files. Each module wraps memRef.current as a transparent Proxy, fails open on every error, and can be disabled independently via env vars.
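The shared wrapper pattern can be sketched as follows. The `wrapMemory` signature and override shape are illustrative, but the two invariants described above hold: methods are intercepted through a transparent Proxy, and any module error falls back to the untouched core behaviour.

```javascript
// Sketch of the fail-open Proxy pattern the modules share (names illustrative).
function wrapMemory(memory, overrides) {
  return new Proxy(memory, {
    get(target, prop, receiver) {
      const original = Reflect.get(target, prop, receiver);
      const override = overrides[prop];
      if (typeof override !== 'function') return original; // pass through untouched
      return function (...args) {
        try {
          // Override receives the bound original so it can delegate to the core.
          return override.call(target, original.bind(target), ...args);
        } catch (_) {
          // Fail open: a broken module can never take down the core engine.
          return typeof original === 'function' ? original.apply(target, args) : original;
        }
      };
    },
  });
}

// Even a wrapper whose override throws still returns the core result.
const core = { recall: (q) => [`result for ${q}`] };
const broken = wrapMemory(core, { recall() { throw new Error('module bug'); } });
console.log(broken.recall('hi')); // ['result for hi'] (core behaviour survives)
```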
Modules must be wired into boot() in this order for correct behaviour: recall-tune → contradict → confidence → dedup → selforg → rl-memory.
Confidence Scoring
Tracks how reliable each memory is on a 0.0–1.0 scale. Confidence initialises at 1.0 on first store, decays when contradictions are detected, and boosts when the same fact is reinforced across sessions. Exposed on every recall() result.
Wraps remember() and recall(). Reads and writes a confidence column via the underlying DB handle, falling back to an in-process map if the DB handle is unavailable.
Wire-in
```js
try {
  const _cf = require2('./vektor-confidence');
  memRef.current = _cf.wrapMemory(memRef.current);
} catch (_) {}
```
Confidence Rules
| Event | Delta | Description |
|---|---|---|
| New store | init 1.0 | Brand-new memory initialised at full confidence |
| Reinforcement | +0.05 | High-similarity write (≥0.85 score), no contradiction |
| In-place update | +0.025 | Mild boost on contradiction-resolved overwrite |
| Superseded | −0.10 | Old memory flagged as superseded by vektor-contradict |
| Hard replace | −0.20 | Memory deleted and replaced by contradiction |
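A minimal sketch of how the rules above could be applied, assuming confidence is stored as a plain number per memory; the event names and clamping helper are illustrative, not the module's actual API:

```javascript
// Confidence deltas per event, matching the table above.
const DELTAS = {
  reinforce: +0.05,       // high-similarity write, no contradiction
  inplace_update: +0.025, // contradiction-resolved overwrite
  superseded: -0.10,      // flagged by vektor-contradict
  hard_replace: -0.20,    // deleted and replaced by contradiction
};

function applyConfidence(current, event) {
  if (event === 'new_store') return 1.0; // brand-new memory: full confidence
  const delta = DELTAS[event] || 0;      // unknown events are a no-op
  // Clamp to the documented 0.0–1.0 scale.
  return Math.min(1.0, Math.max(0.0, current + delta));
}
```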
CLI
```bash
node vektor-confidence.js list          # all memories sorted by confidence asc
node vektor-confidence.js get <id>      # confidence for one memory
node vektor-confidence.js set <id> 0.75 # set confidence manually
node vektor-confidence.js decay <id>    # apply decay (default −0.20)
node vektor-confidence.js boost <id>    # apply boost (default +0.05)
```
Dedup on Write
Before every remember() call, the module recalls the top-5 nearest existing memories. If cosine similarity meets the threshold, it merges into the existing record instead of inserting a duplicate. Keeps the graph clean automatically.
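The write path can be sketched as below, assuming recall() returns candidates as `{ id, score }` sorted by descending similarity, and that `update()` and `remember()` exist on the core handle (both assumptions, not confirmed by the module's source):

```javascript
// Dedup-on-write sketch using the env vars documented below.
const THRESHOLD = Number(process.env.VEKTOR_DEDUP_THRESHOLD || 0.95);
const LIMIT = Number(process.env.VEKTOR_DEDUP_RECALL_LIMIT || 5);

async function rememberDeduped(memory, content, opts = {}) {
  // Look for near-duplicates among the closest existing memories.
  const candidates = await memory.recall(content, { limit: LIMIT });
  const best = (candidates || [])[0];
  if (best && best.score >= THRESHOLD) {
    // Merge into the existing record instead of inserting a duplicate.
    return memory.update(best.id, content, opts);
  }
  return memory.remember(content, opts);
}
```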
Wire-in
```js
try {
  const _dd = require2('./vektor-dedup');
  memRef.current = _dd.wrapMemory(memRef.current);
} catch (_) {}
```
Environment Variables
| Variable | Default | Description |
|---|---|---|
| VEKTOR_DEDUP_THRESHOLD | 0.95 | Cosine similarity cutoff for merge |
| VEKTOR_DEDUP_RECALL_LIMIT | 5 | Number of candidates to check per write |
| VEKTOR_DEDUP_UPDATE_IMPORTANCE | true | Inherit max importance when merging |
CLI
```bash
node vektor-dedup.js stats                 # insert/update/error counts
node vektor-dedup.js test "memory content" # dry-run duplicate check
node vektor-dedup.js config                # show active config
```
Recall Tuning
Exposes min_score, max_results, and recency boost parameters for recall without touching the obfuscated core. Applied as the outermost proxy — wraps before all others.
Wire-in (must be FIRST proxy)
```js
try {
  const _rt = require2('./vektor-recall-tune');
  memRef.current = _rt.wrapMemory(memRef.current, { push });
} catch (_) {}
```
Environment Variables
| Variable | Default | Description |
|---|---|---|
| VEKTOR_RECALL_MIN_SCORE | 0.0 | Filter results below this cosine score |
| VEKTOR_RECALL_MAX_RESULTS | 20 | Hard cap on results returned |
| VEKTOR_RECALL_BOOST_RECENT | true | Apply recency boost to scores |
| VEKTOR_RECALL_BOOST_HALFLIFE | 30 | Recency halflife in days |
| VEKTOR_RECALL_BOOST_WEIGHT | 0.15 | Max boost contribution to score |
| VEKTOR_RECALL_DEFAULT_LIMIT | 5 | Default limit when not specified |
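One plausible shape for the recency boost, using the halflife and weight parameters above. The exact decay formula is an assumption, not taken from the module:

```javascript
// Recency-boost sketch: exponential half-life decay added on top of the
// raw cosine score. A brand-new memory gets the full WEIGHT, one exactly
// HALFLIFE_DAYS old gets half of it, and so on.
const HALFLIFE_DAYS = Number(process.env.VEKTOR_RECALL_BOOST_HALFLIFE || 30);
const WEIGHT = Number(process.env.VEKTOR_RECALL_BOOST_WEIGHT || 0.15);

function boostedScore(score, ageDays) {
  return score + WEIGHT * Math.pow(2, -ageDays / HALFLIFE_DAYS);
}
```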
Runtime Tuning API
```js
memory.setRecallTune({ minScore: 0.35, boostRecent: false });
memory.getRecallTune();   // → current config object
memory.resetRecallTune(); // restore defaults
```
CLI
```bash
node vektor-recall-tune.js show
node vektor-recall-tune.js set min_score 0.35
node vektor-recall-tune.js set max_results 10
node vektor-recall-tune.js set boost_recent false
node vektor-recall-tune.js set boost_halflife 14
node vektor-recall-tune.js reset
node vektor-recall-tune.js test "query" --db ./slipstream-memory.db
```
RL-Based Prioritisation
Logs which recalled memories were actually used in agent responses. Trains a lightweight logistic regression scorer over 5 features — importance, recency, frequency, confidence, bias. Gradually replaces static importance with learned importance.
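The scorer can be sketched as plain logistic regression with one SGD step per labelled usage example; the feature-vector layout and weight handling here are illustrative:

```javascript
// Logistic scorer sketch over the 5 documented features, with the learned
// score blended into static importance via VEKTOR_RL_BLEND_RATIO.
const LR = Number(process.env.VEKTOR_RL_LEARN_RATE || 0.05);
const BLEND = Number(process.env.VEKTOR_RL_BLEND_RATIO || 0.35);

const sigmoid = (z) => 1 / (1 + Math.exp(-z));

// features: [importance, recency, frequency, confidence, bias(=1)]
function learnedScore(weights, features) {
  return sigmoid(weights.reduce((sum, w, i) => sum + w * features[i], 0));
}

// One SGD step on a single labelled example
// (used = was this memory actually referenced in the agent's response?).
function sgdStep(weights, features, used) {
  const err = learnedScore(weights, features) - (used ? 1 : 0);
  return weights.map((w, i) => w - LR * err * features[i]);
}

// Learned score gradually displaces the static importance column.
function blendedImportance(staticImportance, weights, features) {
  return (1 - BLEND) * staticImportance + BLEND * learnedScore(weights, features);
}
```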
Wire-in
```js
try {
  const _rl = require2('./vektor-rl-memory');
  memRef.current = _rl.wrapMemory(memRef.current, { push });
} catch (_) {}
```
Call memory.markSessionUsed(recalledIds) after each agent response, passing the IDs of memories that were injected into context. Without this, the scorer never trains.
Environment Variables
| Variable | Default | Description |
|---|---|---|
| VEKTOR_RL_TRAIN_EVERY | 50 | Train after N new usage log entries |
| VEKTOR_RL_LEARN_RATE | 0.05 | SGD learning rate |
| VEKTOR_RL_BLEND_RATIO | 0.35 | How much learned score influences final importance |
| VEKTOR_RL_MIN_SAMPLES | 10 | Min log entries before scorer activates |
| VEKTOR_RL_WINDOW_DAYS | 30 | Rolling usage window in days |
Runtime API
```js
// After the agent responds, pass IDs of memories that were used
memory.markSessionUsed(recalledIds);

// Force a training pass
memory.rlTrain();

// Scorer state
memory.rlStats(); // → { model, logCount }
```
CLI
```bash
node vektor-rl-memory.js stats    # scorer state + top used memories
node vektor-rl-memory.js log <id> # manually mark a memory as used
node vektor-rl-memory.js train    # force training pass
node vektor-rl-memory.js reset    # wipe usage log (memories intact)
```
Briefing Scheduler
Fires a memory briefing automatically 8 seconds after boot, then on a configurable interval. Synthesises top recalled memories into 3–6 bullet points via LLM and pushes them into the chat stream. Optionally injects the briefing into the active session's system prompt.
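The timing logic reduces to a one-shot boot timer followed by a repeating interval. This sketch omits the LLM synthesis and system-prompt injection entirely; the option names mirror the env vars below, but the signature and stop handle are illustrative:

```javascript
// Scheduling sketch: first briefing after the boot delay, then on a fixed
// interval. Returns a stop handle for clean teardown.
function startBriefingScheduler(runBriefing, {
  bootDelayMs = Number(process.env.VEKTOR_BRIEFING_BOOT_DELAY_MS || 8000),
  intervalMs = Number(process.env.VEKTOR_BRIEFING_INTERVAL_MS || 3600000),
  enabled = process.env.VEKTOR_BRIEFING_ENABLED !== 'false',
} = {}) {
  if (!enabled) return () => {}; // disabled: no-op stop handle
  let interval = null;
  const boot = setTimeout(() => {
    runBriefing();                                   // first briefing after boot
    interval = setInterval(runBriefing, intervalMs); // then on the interval
  }, bootDelayMs);
  return () => {
    clearTimeout(boot);
    if (interval) clearInterval(interval);
  };
}
```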
Wire-in
```js
try {
  const _bs = require2('./vektor-briefing-scheduler');
  _bs.startBriefingScheduler(memRef, {
    provider: () => providerRef.current,
    model: () => modelRef.current,
    push,
    getSession: () => sessionRef.current,
    patchSession: (patch) => Object.assign(sessionRef.current, patch),
  });
} catch (_) {}
```
Environment Variables
| Variable | Default | Description |
|---|---|---|
| VEKTOR_BRIEFING_BOOT_DELAY_MS | 8000 | Delay after boot before first briefing (ms) |
| VEKTOR_BRIEFING_INTERVAL_MS | 3600000 | Interval between briefings (ms). Default: 1hr |
| VEKTOR_BRIEFING_RECALL_LIMIT | 20 | Memories to include in briefing input |
| VEKTOR_BRIEFING_INJECT_SYSTEM | true | Inject briefing into session system prompt |
| VEKTOR_BRIEFING_MAX_SYSTEM_CHARS | 1200 | Max chars to inject into system prompt |
| VEKTOR_BRIEFING_ENABLED | true | Set to false to disable entirely |
CLI (one-shot)
```bash
node vektor-briefing-scheduler.js --db ./slipstream-memory.db
```
Self-Organising Memory
Every remember() triggers a non-blocking background agent that extracts keywords, links the new memory to related nodes, classifies relationships (SUPPORTS, EXTENDS, CONTRASTS, RELATED, PREREQUISITE), and synthesises a Zettelkasten context note when enough links are found. remember() itself returns immediately — zero added latency.
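The zero-latency guarantee comes down to a fire-and-forget dispatch after the core write resolves. A minimal sketch, with `enrich()` standing in for the keyword/link/note pipeline:

```javascript
// Non-blocking enrichment sketch: remember() resolves at core speed, and the
// background pipeline runs on a later tick with failures swallowed (fail open).
function wrapRememberNonBlocking(memory, enrich) {
  const orig = memory.remember.bind(memory);
  memory.remember = async (content, opts) => {
    const result = await orig(content, opts); // core write, unchanged latency
    setImmediate(() => {
      // Runs after remember() has already resolved to the caller.
      Promise.resolve(enrich(result, content)).catch(() => {});
    });
    return result;
  };
  return memory;
}
```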
Wire-in
```js
try {
  const _so = require2('./vektor-selforg');
  memRef.current = _so.wrapMemory(memRef.current, {
    provider: () => providerRef.current,
    model: () => modelRef.current,
    callLLM,
    push,
  });
} catch (_) {}
```
What runs per remember() call (non-blocking)
| Phase | Description |
|---|---|
| Keywords | LLM extracts up to 6 keywords → saved to memories.keywords |
| Link candidates | Recalls top-8 related memories above score 0.45 |
| Relationship classification | LLM labels each pair as SUPPORTS / EXTENDS / CONTRASTS / RELATED / PREREQUISITE with 0–1 strength → written to memory_links table |
| Context note | If ≥2 links found and importance ≥0.6, LLM synthesises a permanent note → saved to memories.context_note |
The memory_links table and the keywords and context_note columns are created automatically on first run. No manual migration step is needed.