Forked from tupik/summary-chat
TOTAL RECALL generates a summary of an old chat's text and puts it into an annotation at the beginning of a new chat.
To run the generator, select it in the list of models (turn it on), then type the command summa: in the chat, followed by part of the old chat's name from the list in the left panel.
The list of commands can be extended in the config, using a semicolon (;) as the separator. All the words are synonyms for a single command: each one is a prefix for the chat name. Any language is supported; it's up to you.
Example 1: $$$old fairy tales
Example 2: summa:old fairy files
Example 3: ---all hairy falls
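The prefix matching described above could be sketched like this (a minimal sketch, not the plugin's actual code; the config value and the `extractQuery` helper are hypothetical):

```typescript
// Hypothetical ';'-separated command list, as it might be read from the config.
const configPrefixes = "summa:;$$$;---";

const prefixes: string[] = configPrefixes.split(";").filter((p) => p.length > 0);

// Returns the chat-name query if the message starts with any known prefix,
// otherwise null.
function extractQuery(message: string): string | null {
  for (const prefix of prefixes) {
    if (message.startsWith(prefix)) {
      return message.slice(prefix.length).trim();
    }
  }
  return null;
}

console.log(extractQuery("summa:old fairy files")); // old fairy files
console.log(extractQuery("$$$old fairy tales"));    // old fairy tales
console.log(extractQuery("hello"));                 // null
```

Any word works as a synonym because each one is just a literal prefix stripped from the front of the message.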
Available at https://lmstudio.ai/tupik
There's no automatic model loading yet, even though the field exists in the config.
The generator uses any LLM model that is already loaded.
PS: Windows only for now; the path to .lmstudio/conversations/ is hardcoded. If you know how it works on Linux/macOS, you could fork and fix it, or I'll fix it later.
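A cross-platform fix could resolve the folder from the user's home directory instead of hardcoding a Windows path. This is a sketch under the assumption that LM Studio keeps conversations under ~/.lmstudio/conversations on every OS, which I haven't verified for Linux/macOS:

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Assumption: the conversations folder sits at ~/.lmstudio/conversations
// on all platforms. Verify the Linux/macOS locations before relying on this.
function conversationsDir(): string {
  return path.join(os.homedir(), ".lmstudio", "conversations");
}

console.log(conversationsDir());
```

path.join picks the right separator per OS, so the same code works on Windows, Linux, and macOS.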
Since the chat name (in LM Studio) is generated automatically, the new chat will get the same name as the previous one, because it starts with the same words. You'll have to rename it manually. For your convenience.

Sometimes it shows token: 0. This isn't an error; this value is stored within the chat file. The chat files aren't perfect.

Ideally, the size of the annotation should not be fixed, but should depend on the size of the source text. (7-10%-25) The summarizing system prompt should be in file.md - I know it.
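The proportional sizing could look like this (a sketch only; the 7%..25% bounds are my reading of the numbers mentioned in this README, not the plugin's actual behavior, and `annotationBudget` is a hypothetical helper):

```typescript
// Hypothetical helper: annotation size as a percentage of the source text,
// clamped to assumed bounds of 7%..25%.
function annotationBudget(sourceChars: number, targetPct: number): number {
  const pct = Math.min(25, Math.max(7, targetPct));
  return Math.round((sourceChars * pct) / 100);
}

console.log(annotationBudget(10000, 10)); // 1000 characters at 10%
console.log(annotationBudget(10000, 50)); // clamped to 25% -> 2500
```

The clamp keeps summaries of tiny chats from collapsing to nothing and summaries of huge chats from ballooning.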
lms get tupik/summary-chat
│ Error ──────────────────────────────────────────────────────────────────
│ No loaded model satisfies all requirements specified in the query.     │
│ Loaded Models:                                                         │
│   You don't have any models loaded.                                    │
│ Your query:                                                            │
│   • The model must be an LLM                                           │