TOTAL RECALL generates a summary of an old chat (saved in the left panel) and places it as an annotation at the beginning of a new chat.
To run the generator, select it from the list of models (turn it on in the drop-down list at the top).
Then, in the chat input field, enter the command summa: followed by part of the old chat's name from the left panel.
In the config, the list of commands can be expanded using a separator (;).
Every word in that list is a synonym for the same command: a prefix placed before the chat name. Any language works; it's up to you.
Anything else is not processed: the plugin just prints a short usage instruction right in the chat. Examples:
$$$old fairy tales, summa:old fairy files, ---all hairy balls, $$$script, recall:project #67, summa:n8n, ---Сцена
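
For illustration, here is a minimal sketch of how such a separator-delimited command list could be parsed and matched against a chat message. The variable names and logic are assumptions for this example, not the plugin's actual code:

```typescript
// Hypothetical sketch: parse a ";"-separated command list (as described above)
// and check whether a chat message starts with one of the command prefixes.
const commandList = "summa:;recall:;$$$;---"; // example config value

// Returns the chat-name query if the input starts with a known command, else null.
function matchCommand(input: string): string | null {
  const prefixes = commandList.split(";").map((p) => p.trim()).filter((p) => p.length > 0);
  for (const prefix of prefixes) {
    if (input.startsWith(prefix)) {
      return input.slice(prefix.length).trim(); // "summa:old fairy tales" -> "old fairy tales"
    }
  }
  return null; // not a command: the plugin would print its short instruction instead
}

console.log(matchCommand("summa:old fairy tales")); // "old fairy tales"
console.log(matchCommand("hello"));                 // null
```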
Where did it come from? It's available at https://lmstudio.ai/tupik
After installation, the plugin doesn't appear in the right panel. It disappears, and this is expected: this is how all generators (a type of plugin) work. The configuration panel appears on the right only after you launch the generator from the central drop-down menu.
How Total Recall works: it is driven by the summa: command, and you can set your own command in the config. There's no automatic model loading yet, even though a Model-name field exists in the config.
The plugin-generator uses whatever LLM model is already loaded, so the quality of the summary may vary.
Models with Instruct/it capability work best, since they follow the prompt more reliably.
The developers at Alibaba (the creators of Qwen) specifically trained their models to recognize the strings /think and /no_think. When the model sees this trigger in the prompt, it closes its reasoning immediately, generating the closing tag and going straight to the final answer. Where else does this work? Only in the Qwen3 family (and its derivatives).
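
As an illustration, a plugin could take advantage of this trigger when a Qwen3 model is loaded. A minimal sketch, assuming the model is detected by its name; this is not necessarily what Total Recall actually does:

```typescript
// Hypothetical sketch: append Qwen3's /no_think trigger so the model skips
// its reasoning phase and goes straight to the summary.
function buildSummaryPrompt(modelName: string, chatText: string): string {
  const base = `Summarize the following chat:\n\n${chatText}`;
  // /think and /no_think are honored only by the Qwen3 family (and derivatives).
  const isQwen3 = modelName.toLowerCase().includes("qwen3"); // assumption: detect by name
  return isQwen3 ? `${base} /no_think` : base;
}

console.log(buildSummaryPrompt("qwen3-8b", "...chat history...")); // ends with "/no_think"
```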
The plugin is still under development; the outcome depends heavily on the model and the prompt.
Since the chat name (in LM Studio) is generated automatically, the new chat will get the same name as the previous one,
with a suffix appended, because the chat starts with the same words. You'll have to rename it manually, for your own convenience.
Sometimes the output shows Tokens: 0. This isn't an error; that value is stored inside the chat file.
LM Studio calculates token counts itself when it needs them, and the plugin suddenly intervenes in that flow.
The chat-file pipeline still doesn't work the way you might expect.
The summarizing system prompt is not in a file.md yet, and not in the config either; the prompt is still hardcoded in the plugin.
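
In practice, a hardcoded prompt is simply a constant baked into the plugin source. A sketch of the idea; the wording below is invented for illustration and is not the plugin's actual prompt:

```typescript
// Hypothetical: what "hardcoded in the plugin" means in practice.
const SYSTEM_PROMPT =
  "You are a summarizer. Write a concise summary of the chat below, " +
  "keeping names, decisions, and open questions.";
// Planned improvement: load this text from a file.md or a config field instead.
```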
Install it via the lms CLI:

```
lms get tupik/summary-chat
```
If no model is loaded, the generator reports an error like this:

```
│ Error ──────────────────────────────────────────────────────────────
│ No loaded model satisfies all requirements specified in the query.
│ You don't have any models loaded. • The model must be an LLM
```
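
If you hit this error, load any LLM first; for example, from the CLI (the model name below is just an example):

```
lms load qwen3-8b
```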