# high-perf-tools

A high-performance LM Studio plugin for Python and C++ development, filesystem operations, git, web research, and autonomous Agent Zero capabilities.
## Tools

### Filesystem

| Tool | Description |
|---|---|
| read_file | Read files with optional line range selection |
| write_file | Write or append to files, creating parent dirs |
| replace_in_file | Surgical find-and-replace (literal or regex) |
| delete_path | Delete files or directories |
| move_path | Move or rename files/directories |
| copy_path | Copy files within the workspace |
| list_directory | List files with metadata, filtering, recursive walk |
| create_directory | Create directories (with parents) |
| file_info | File metadata: size, line count, permissions |
| search_in_files | grep-like search across workspace files |
### Execution

| Tool | Description |
|---|---|
| run_command | Execute arbitrary shell commands |
| run_python_code | Run a Python snippet (temp file) |
| run_python_file | Run an existing Python script |
| get_environment | Report Python version, PATH, installed tools |
### Python development

| Tool | Description |
|---|---|
| analyze_python | AST analysis: imports, classes, functions, globals |
| format_python | Format with ruff or black |
| lint_python | Lint with ruff or pylint, optional auto-fix |
| run_tests | Run pytest with filtering and verbosity control |
| pip_install | Install Python packages |
| pip_list | List installed packages (supports --outdated) |
| type_check_python | Run mypy, optional --strict mode |
### C++ development

| Tool | Description |
|---|---|
| compile_cpp | Compile with clang++/g++, sanitizers, custom flags |
| run_binary | Execute a compiled binary |
| compile_and_run | Compile and immediately run a snippet or file |
| format_cpp | Format with clang-format |
| analyze_cpp | Static analysis with clang-tidy |
| cmake_build | CMake configure + build |
| get_cpp_info | Compiler and tool version report |
### Web research

| Tool | Description |
|---|---|
| fetch_url | Fetch a URL and return plain text (HTML stripped) |
| search_web | DuckDuckGo search, returns titles + snippets |
### Git

| Tool | Description |
|---|---|
| git_status | Working-tree status |
| git_diff | Working-tree or staged diff |
| git_log | Commit history with filtering |
| git_show | Show a specific commit |
| git_branches | List local and remote branches |
| git_stage | Stage files (git add) |
| git_commit | Create a commit (optionally stages files first) |
| git_init | Initialise a new repository |
### Agent Zero and memory

| Tool | Description |
|---|---|
| memory_save | Persist a fact or solution across conversations |
| memory_recall | Keyword-search persistent memory |
| memory_list | List all saved memories |
| memory_delete | Delete a memory by id or key |
| list_models | List models loaded in LM Studio |
| spawn_agent | Delegate a task to an autonomous sub-agent |
| create_tool | Create a reusable Python tool saved to the workspace |
| call_tool | Call a previously created tool by name |
| list_custom_tools | List all dynamically created tools |
| delete_tool | Delete a dynamic tool |
## Installation

```bash
cd ts-plugin
npm install
lms push
```
Or during development:

```bash
npm run dev
```
## Configuration

Open Settings → Plugins → high-perf-tools in LM Studio.
| Setting | Default | Description |
|---|---|---|
| Workspace Path | (cwd) | All file operations are sandboxed here |
| Python Interpreter | (system python3) | Conda env name (e.g. misc) or full binary path |
| Allow Shell Commands | off | Enables run_command |
| Allow Python Execution | off | Enables run_python_code, run_python_file, run_tests |
| Allow C++ Compilation | off | Enables compile_cpp, run_binary, compile_and_run, cmake_build |
| Allow pip install | off | Enables pip_install |
| Allow Git Write Operations | on | Enables git_stage, git_commit |
| Prefer Clang over GCC | on | Use clang++ when both compilers are available |
| Command Timeout (seconds) | 60 | Hard kill timeout for all subprocess execution |
| Sub-Agent Model ID | (reuse current) | Model for spawn_agent (blank = no extra RAM) |
| LM Studio API Endpoint | http://localhost:1234 | OpenAI-compatible API used by sub-agents |
| Sub-Agent Max Iterations | 6 | Tool-call loop limit per spawn_agent call |
Set **Python Interpreter** to one of:

- `python3` (or any binary on `PATH`)
- a conda env name (e.g. `ml`) — the plugin searches common conda prefix directories (`~/miniconda3/envs`, `~/anaconda3/envs`, etc.) and falls back to `conda run -n <name>` if the env is not found there
- a full binary path (e.g. `/opt/homebrew/envs/ml/bin/python`)

All Python tools, pip, pytest, and dynamic tools share this single interpreter.
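The lookup order above can be sketched roughly as follows (an illustrative sketch, not the plugin's actual code; `resolve_interpreter` and the exact prefix list are assumptions):

```python
import os
import shutil

# Conda prefix directories the plugin is described as searching (assumed list).
CONDA_PREFIXES = ["~/miniconda3/envs", "~/anaconda3/envs"]

def resolve_interpreter(setting: str) -> list[str]:
    """Turn the 'Python Interpreter' setting into a command prefix."""
    # A full path or a binary found on PATH is used as-is.
    if os.path.sep in setting or shutil.which(setting):
        return [setting]
    # Otherwise treat the setting as a conda env name and look for its python.
    for prefix in CONDA_PREFIXES:
        candidate = os.path.expanduser(os.path.join(prefix, setting, "bin", "python"))
        if os.path.isfile(candidate):
            return [candidate]
    # Fall back to `conda run -n <name> python`.
    return ["conda", "run", "-n", setting, "python"]
```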
## Persistent memory

The plugin stores memories as JSON at `.agent_memory/memory.json` inside the workspace. Memories survive across conversations. The model can tag and query them with memory_save / memory_recall.
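A minimal picture of the store and a keyword search over it (the schema shown is an assumption; only the `.agent_memory/memory.json` path comes from the plugin description):

```python
# Hypothetical shape of .agent_memory/memory.json; the real schema may differ.
memories = [
    {"id": 1, "key": "build-flags", "tags": ["cpp"],
     "text": "Use -fsanitize=address when debugging segfaults."},
    {"id": 2, "key": "api-endpoint", "tags": ["lmstudio"],
     "text": "The local OpenAI-compatible API listens on http://localhost:1234."},
]

def recall(query: str) -> list[dict]:
    """Keyword search, roughly what memory_recall is described as doing."""
    q = query.lower()
    return [m for m in memories
            if q in m["text"].lower() or q in m["key"] or q in m["tags"]]
```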
## Sub-agents

Sub-agents run an autonomous tool-call loop against LM Studio's OpenAI-compatible API.
RAM-aware model selection: leave **Sub-Agent Model ID** blank to reuse the currently loaded model (no extra RAM), or set it to load a separate model for sub-agent work.
The sub-agent has access to: read_file, write_file, list_directory, run_python_code, run_command, memory_save, memory_recall.
Set allow_tools=false for pure generation tasks (faster, no tool-call loop).
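The autonomous loop can be sketched as below; `model` stands in for a chat-completion call against the OpenAI-compatible endpoint, and `stub_model` is a fake used purely for illustration:

```python
def run_sub_agent(task, model, tools, max_iterations=6):
    """Minimal autonomous tool-call loop (a sketch, not the plugin's code)."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_iterations):
        reply = model(messages)                 # one chat-completion call
        if "tool" not in reply:                 # plain answer: we're done
            return reply["content"]
        result = tools[reply["tool"]](**reply["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    return "max iterations reached"

# A stub model that calls read_file once, then answers.
def stub_model(messages):
    if len(messages) == 1:
        return {"tool": "read_file", "args": {"path": "notes.txt"}}
    return {"content": "done: " + messages[-1]["content"]}

tools = {"read_file": lambda path: f"contents of {path}"}
```

Calling `run_sub_agent("summarise notes.txt", stub_model, tools)` walks one tool call, feeds the result back, and returns the model's final answer; the default of 6 iterations mirrors the Sub-Agent Max Iterations setting.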
## Dynamic tools

The model can write Python functions and register them as persistent tools:

```python
create_tool(
    name="add_numbers",
    description="Add two numbers",
    args_schema='{"x": number, "y": number}',
    python_code='return str(args["x"] + args["y"])'
)
```
Tools are saved as Python scripts in .agent_tools/ inside the workspace and can be called by name in future conversations. Arguments are passed as JSON via stdin; the function returns a string or JSON-serialisable value.
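A generated script might look roughly like this (the file name and layout are hypothetical; only the stdin-JSON convention and the `.agent_tools/` location come from the description above):

```python
# .agent_tools/add_numbers.py -- hypothetical generated tool script
import json
import sys

def tool(args):
    # Body supplied via create_tool's python_code parameter.
    return str(args["x"] + args["y"])

if __name__ == "__main__":
    args = json.load(sys.stdin)   # arguments arrive as JSON on stdin
    print(tool(args))             # result goes back as a string
```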
## Error handling

Every tool is wrapped in `safe_impl()`. When a tool throws, the model receives a structured JSON response it can parse and retry:

```json
{
  "tool_error": true,
  "tool": "write_file",
  "error": "Path escapes the workspace",
  "hint": "Read the error above, fix the parameter causing the issue, and retry the tool call."
}
```
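In Python terms, the wrapper behaves roughly like this (a behavioural sketch; the actual plugin is implemented in TypeScript and its `safe_impl()` may differ):

```python
import json

def safe_impl(tool_name, impl):
    """Wrap a tool so failures come back as structured JSON (a sketch)."""
    def wrapped(**kwargs):
        try:
            return impl(**kwargs)
        except Exception as exc:
            return json.dumps({
                "tool_error": True,
                "tool": tool_name,
                "error": str(exc),
                "hint": ("Read the error above, fix the parameter "
                         "causing the issue, and retry the tool call."),
            })
    return wrapped
```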
The prompt preprocessor injects tool-call rules on the first turn, reminding the model to output valid JSON and handle `tool_error` responses.
## Security

- Path traversal attempts (e.g. `../../etc/passwd`) are rejected.
- `fetch_url` blocks requests to localhost, RFC 1918, and link-local addresses.
- `run_command` warning: shell execution is not sandboxed beyond setting the working directory. Only enable it if you trust the model.

## Requirements

- The `lms` CLI (`npm install -g @lmstudio/lms`)
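The path-traversal rejection above can be sketched as a resolve-and-check helper (hypothetical helper name; the plugin's actual check may differ):

```python
from pathlib import Path

def resolve_in_workspace(workspace: str, user_path: str) -> Path:
    """Reject paths that escape the workspace, e.g. '../../etc/passwd'."""
    root = Path(workspace).resolve()
    target = (root / user_path).resolve()
    # Allow the root itself or anything strictly beneath it.
    if root != target and root not in target.parents:
        raise ValueError("Path escapes the workspace")
    return target
```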