Pinned
A flexible RAG (Retrieval-Augmented Generation) plugin for LM Studio with dynamic embedding model selection and intelligent context management.
Projects
An LM Studio plugin that gives your local models the ability to search the web and read web pages.
Memory slots that let AI models remember facts long-term and transfer them between chats.
DuckDuckGo Search MCP – Supercharge your LM Studio workflows with real‑time, privacy‑first web search. Get fresh results from DuckDuckGo directly in your prompts, perfect for research, creative projects, and automation. Considered the best to dat...
A preset to help iterate on prompts for Cursor or Windsurf.
PRESET
Big RAG Plugin for LM Studio: a powerful RAG (Retrieval-Augmented Generation) plugin that can index and search through gigabytes or even terabytes (not tested) of document data. Hosted at github.com/ari99/lm_studio_big_rag_plugin
CoT is a strategic-analyst preset that uses advanced methods such as ToT, ReAct, STaR, and AMR. You can position it as a higher-level mind.
PRESET
Converts Markdown table to Excel file.
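The conversion can be sketched in two steps: parse the pipe-delimited Markdown table into rows, then append the rows to an Excel worksheet. This is a minimal illustration, not the plugin's actual code; the helper names `parse_markdown_table` and `write_xlsx` are hypothetical, and the Excel step assumes the `openpyxl` package.

```python
def parse_markdown_table(md: str) -> list[list[str]]:
    """Parse a pipe-delimited Markdown table into a list of rows."""
    rows = []
    for line in md.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Skip the separator row, e.g. |---|:---:|
        if all(c and set(c) <= set(":- ") for c in cells):
            continue
        rows.append(cells)
    return rows

def write_xlsx(rows: list[list[str]], path: str) -> None:
    """Write rows to an Excel file (assumes `pip install openpyxl`)."""
    from openpyxl import Workbook
    wb = Workbook()
    for row in rows:
        wb.active.append(row)
    wb.save(path)
```

For example, `parse_markdown_table("| A |\n|---|\n| 1 |")` returns `[["A"], ["1"]]`, which `write_xlsx` would emit as a header row and one data row.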
Reworked plugin with proxy functionality that lets LLMs "visit" websites by supplying them with the links, image URLs, and text content of any web page.
OpenAI, Claude, or any other OpenAI-compatible endpoint.
PLUGIN
Gives LLMs the ability to run JavaScript/TypeScript code in a sandboxed environment using Deno.
PLUGIN
Podcast transcript analysis
Gives LLMs filesystem access to a user-specified directory.
System prompt to show the model how to write tools with lmstudio-js
Helps the user create a working Chrome extension.
System prompt to show the model how to write tools with lmstudio-py
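In lmstudio-py, a tool can be a plain Python function with type hints and a docstring, handed to the model's `.act()` tool-use loop. Below is a minimal sketch assuming the `lmstudio` package is installed and a model is loaded in LM Studio; the `multiply` tool is an illustrative example, not part of the preset.

```python
def multiply(a: float, b: float) -> float:
    """Multiply two numbers. The docstring becomes the tool's description."""
    return a * b

if __name__ == "__main__":
    try:
        import lmstudio as lms  # pip install lmstudio
    except ImportError:
        lms = None  # SDK not installed; skip the live demo
    if lms is not None:
        model = lms.llm()  # the model currently loaded in LM Studio
        # .act() runs the tool-use round trips; the model may call `multiply`.
        model.act("What is 12.3 times 4.56?", [multiply], on_message=print)
```

The system prompt's job is to show the model this shape: a typed signature plus a docstring is all the schema the SDK needs to expose the function as a tool.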
An MCP (Model Context Protocol) server that gives AI assistants full terminal access — execute commands, read/write/edit files, navigate the filesystem, manage directories, and inspect environment variables.
Template tool provider plugin with a single tool: create file
Qwen Coder Next is an 80B MoE with 3B active parameters designed for coding agents and local development. Excels at long-horizon reasoning, complex tool usage, and recovery from execution failures.
MODEL
MiniMax M2 is a 230B-parameter MoE LLM (10B active), built for coding and agentic workflows.
MODEL
Second-generation Devstral for agentic coding with 123B parameters and a 256k context window. Built for tool use to explore codebases, edit multiple files, and power software engineering agents.
MODEL
NVIDIA Nemotron 3 Super, a 120B open hybrid MoE model (12B active), supporting a context window of up to 1M tokens.
General-purpose reasoning and chat model trained from scratch by NVIDIA. Contains 30B total parameters with only 3.5B active at a time for low-latency MoE inference.
MODEL
Reworked DuckDuckGo plugin.
Gives LLMs tools to search and read Wikipedia articles (Japanese version).
PLUGIN
Gives LLMs tools to search and read Wikipedia articles.