Forked from khtsly/persistent-memory
Give your local LLM a long-term brain. This plugin stores facts, preferences, projects, and notes across conversations in a local SQLite database — zero cloud, zero external services, fully portable.
Memories are ranked by a composite score blending four signals (inspired by the SRLM paper which showed that combining multiple uncertainty signals outperforms any single one):
| Signal | Weight | What it measures |
|---|---|---|
| TF-IDF Similarity | 55% | Semantic relevance to the current query |
| Recency Decay | 20% | Exponential decay based on last access time |
| Confidence | 15% | How certain we are about this fact (0–1) |
| Access Frequency | 10% | How often this memory has been surfaced |
Decay follows `score = 2^(-days / halfLife)` — memories that stop being accessed gradually fade, just like human memory.
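The ranking above can be sketched in a few lines. This is an illustrative model, not the plugin's actual code: the `Memory` shape, the half-life of 30 days, and the frequency squashing are all assumptions; only the weights and the decay formula come from this README.

```typescript
// Illustrative sketch of the composite ranking; names are hypothetical.
interface Memory {
  tfidfSimilarity: number; // 0–1, semantic relevance to the current query
  lastAccessDays: number;  // days since this memory was last surfaced
  confidence: number;      // 0–1, certainty about the fact
  accessCount: number;     // how often it has been surfaced
}

const HALF_LIFE_DAYS = 30; // assumed half-life; the real value may differ

// Recency decays exponentially: score = 2^(-days / halfLife)
function recencyDecay(days: number, halfLife: number = HALF_LIFE_DAYS): number {
  return Math.pow(2, -days / halfLife);
}

// Squash raw access counts into 0–1 (illustrative normalization)
function frequencyScore(count: number): number {
  return count / (count + 1);
}

// Blend the four signals with the weights from the table above
function compositeScore(m: Memory): number {
  return (
    0.55 * m.tfidfSimilarity +
    0.20 * recencyDecay(m.lastAccessDays) +
    0.15 * m.confidence +
    0.10 * frequencyScore(m.accessCount)
  );
}
```

Note that a memory accessed exactly one half-life ago contributes half its recency weight: `recencyDecay(30)` is 0.5 under the assumed 30-day half-life.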
| Scope | Persistence | Use for |
|---|---|---|
| global (default) | Forever, all chats | User facts, preferences, standing instructions |
| project | Forever, project-filtered | Project-specific context, repo details, team info |
| session | Until LM Studio closes | Temporary context, scratch notes, current-task state |
Global memories are injected into every conversation automatically. Project memories are persisted to SQLite but only surface when that project is referenced. Session memories live entirely in memory — they're never written to disk and vanish when the plugin reloads. This lets you store throwaway context without polluting your permanent knowledge base.
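The session/disk split described above can be sketched as simple scope routing. This is a hypothetical model, not the plugin's actual API: `storeMemory`, the `persist` callback, and `sessionStore` are all illustrative names.

```typescript
// Hypothetical sketch of scope routing; not the plugin's real API.
type Scope = "global" | "project" | "session";

// Session memories live only in process memory and vanish on reload
const sessionStore: string[] = [];

function storeMemory(
  db: { persist(text: string, scope: Scope): void },
  text: string,
  scope: Scope = "global"
): void {
  if (scope === "session") {
    sessionStore.push(text); // never touches disk
  } else {
    db.persist(text, scope); // global/project rows go to SQLite
  }
}
```

The design point is that the session path never reaches the database layer at all, so throwaway context cannot leak into the permanent knowledge base.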
| Category | Use for |
|---|---|
fact | Things about the user: name, job, skills |
preference | Likes, dislikes, coding style, UI choices |
project | Current work, repos, goals |
note | Free-form memos |
instruction | Standing instructions ("always use TypeScript") |
relationship | People, teams, organizations |
context | Situational context ("interviewing at X") |
When Active Project is set, durable repo knowledge should be written into that project's namespace instead of global memory.
- global: user-wide preferences and standing instructions
- project: repo-specific knowledge, architecture decisions, workflow rules, team conventions, recurring issues
- session: temporary task state and scratch context

Setting Active Project and enabling AI extraction is not sufficient on its own: the write path must also store extracted memories with `scope: "project"` and the configured project slug. This plugin now applies that rule to auto-extracted chat memories.
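The write-path rule above amounts to a small tagging step at extraction time. A minimal sketch, assuming hypothetical names (`tagExtracted`, `ExtractedMemory`) rather than the plugin's real internals:

```typescript
// Illustrative: auto-extracted memories are tagged with the active
// project's slug instead of defaulting to global scope.
interface ExtractedMemory {
  text: string;
  scope: "global" | "project";
  project?: string; // project slug, present only for project scope
}

function tagExtracted(text: string, activeProject: string | null): ExtractedMemory {
  if (activeProject) {
    // Durable repo knowledge goes into the project namespace, not global
    return { text, scope: "project", project: activeProject };
  }
  return { text, scope: "global" };
}
```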
Use the bootstrap flow to store distilled project memories, not raw file dumps. Good bootstrap entries include:
Run a project ingest when:
All data stays on your machine. The memory database is a single file at `~/.lmstudio/plugin-data/persistent-memory/memory.db`. You can back it up, move it between machines, or inspect it with standard SQLite tools.
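Because the whole store is one file, a backup is a plain copy. A hedged sketch using Node's standard `fs` module; the path comes from this README, and nothing else about the plugin is assumed:

```typescript
// Back up the single-file memory store with a plain file copy.
import { copyFileSync, existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Database location documented in this README
const dbPath = join(
  homedir(),
  ".lmstudio",
  "plugin-data",
  "persistent-memory",
  "memory.db"
);

if (existsSync(dbPath)) {
  // The entire knowledge base is this one file
  copyFileSync(dbPath, dbPath + ".backup");
}
```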
MIT