A structured idea management and validation plugin that runs inside LM Studio. Capture pain points, develop ideas, map assumptions, design lean validation experiments, define MVPs, and generate landing page copy — all from a single chat session, with everything persisted locally.
```shell
cd ideas-plugin
npm install
npm run build   # compile TypeScript → JS
lms push        # push to LM Studio
```
For live development (auto-recompile on save):
```shell
npm run dev
```
Open Settings → Plugins → ideas in LM Studio to configure:
| Setting | Default | Description |
|---|---|---|
| Data Path | ~/ideas-data | Directory where ideas.json is stored |
| Max Search Results | 8 | Results returned per web search (3–20) |
An idea is a potential solution, product, feature, or business. Each idea is scored across four dimensions:
| Dimension | Scale | Description |
|---|---|---|
| Impact | 1–10 | Value delivered if the idea works |
| Feasibility | 1–10 | How easy it is to build or execute |
| Novelty | 1–10 | How differentiated vs. existing solutions |
| Effort | 1–10 | Resources required (1 = hours, 10 = years) |
A Priority Score is auto-calculated as `(impact + feasibility + novelty − effort) / 3`. Ideas are sorted by this score by default.
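As a sketch in TypeScript (the plugin's own language), the calculation looks like this; the one-decimal rounding shown is illustrative, not something the plugin documents:

```typescript
// Priority score: reward impact, feasibility, and novelty; penalise effort.
interface Scores {
  impact: number;      // 1–10
  feasibility: number; // 1–10
  novelty: number;     // 1–10
  effort: number;      // 1–10 (1 = hours, 10 = years)
}

function priorityScore(s: Scores): number {
  const raw = (s.impact + s.feasibility + s.novelty - s.effort) / 3;
  return Math.round(raw * 10) / 10; // round to one decimal place
}
```

For example, scores of 8 / 7 / 6 / 5 give `(8 + 7 + 6 − 5) / 3 ≈ 5.3`.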
A pain point is a problem, frustration, or inefficiency you observe. Each pain point is tagged with a frequency (`rare` → `constant`) and an impact severity (`low` → `critical`), plus a category and free-form tags.
Experiments test specific assumptions. Each tracks a hypothesis, method, success criteria, result (validated / invalidated / inconclusive), and evidence. Experiment results automatically update the idea's validationStatus.
`capture_idea`: Save a new idea with title, description, category, tags, and scores. Returns the idea with its ID and computed priority score.

`list_ideas`: List all ideas with filtering and sorting. Filters: status, category, tag, keyword search. Sort by: priority, impact, feasibility, novelty, effort, or createdAt.

`get_idea`: Get full details of a single idea, including linked pain points.

`update_idea`: Update any fields on an idea, including `validationStatus` and `mvpDefinition`, to manually override what experiments set automatically.

`delete_idea`: Permanently delete an idea by ID.

`capture_pain_point`: Log a pain point with description, context, who is affected, frequency, and impact severity.

`list_pain_points`: List pain points sorted by impact (critical first). Filter by status, category, impact, tag, or keyword.

`get_pain_point`: Get full details of a pain point, including linked solution ideas.

`update_pain_point`: Update any fields, including `problemStatement` (to store a generated statement) and `status`.

`generate_problem_statement`: Generate a structured problem statement from a pain point using the Who / What / When-Where / Why it matters / Current workarounds / Success criterion framework. Loads from a pain point ID, or accepts inline text. After generating, call `update_pain_point` with `problemStatement` to save it.
`evaluate_idea`: Score an idea across six dimensions, with a rationale for each. Also returns the top 3 risks, the top 3 assumptions to validate first, and a go / validate-more / abandon recommendation.
`link_pain_to_idea`: Create a bidirectional link between a pain point and a solution idea. Lets you map which ideas address which problems and vice versa.

`generate_solution_brief`: Generate a concise product brief (mini-spec) from a pain point + idea pair. Sections: Problem, Proposed Solution, Target Users, Core Features (MVP), Success Metrics, What We're NOT Building, Open Questions.

`brainstorm_solutions`: Generate N diverse solution approaches (default 6) across different angles: SaaS, automation, platform, community, hardware, process. Includes at least one unconventional or contrarian idea. Each approach includes an effort estimate and pros/cons.

`search_similar_problems`: Search the web for existing solutions, competitors, research, and discussions related to a pain point or idea. Focus modes: competitors, solutions, research, discussions, or general.
`search_market_size`: Search for TAM/SAM/SOM data, growth rates, and industry reports for an idea or market. Queries Statista, Grand View Research, TechCrunch, and general sources. Also returns a first-principles TAM estimation framework.
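At its core, a first-principles top-down TAM estimate is simple multiplication. A minimal sketch, where every figure is an illustrative assumption rather than data:

```typescript
// First-principles TAM: reachable users × annual revenue per user.
// All numbers below are illustrative assumptions, not market data.
const targetUsers = 500_000;  // e.g. addressable developers in the segment
const pricePerMonth = 15;     // USD per user per month
const tamAnnual = targetUsers * pricePerMonth * 12;
// 500,000 × $15 × 12 = $90,000,000 per year
```

SAM and SOM then shrink this figure by serviceable segment and realistic capture rate.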
The validation flow follows a Lean Startup loop: map assumptions → design experiments → run them → log results → build scorecard → decide.
`map_assumptions`: Surface every assumption that must be true for an idea to succeed, across four risk categories:
| Category | Question |
|---|---|
| Desirability | Do people actually want this? |
| Feasibility | Can you build it? |
| Viability | Can you make money from it? |
| Usability | Can people use it without help? |
Returns assumptions ranked by `risk × inverse_confidence` — the most dangerous, least-validated assumptions first.
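One way to implement that ranking, assuming `risk` and `confidence` both sit on a 1–5 scale (the plugin's actual scale isn't specified here):

```typescript
interface Assumption {
  statement: string;
  risk: number;       // hypothetical 1–5: how damaging if the assumption is wrong
  confidence: number; // hypothetical 1–5: how sure you already are it's true
}

// risk × inverse_confidence: dangerous, unvalidated assumptions float to the top.
function rankAssumptions(assumptions: Assumption[]): Assumption[] {
  const score = (a: Assumption) => a.risk * (1 / a.confidence);
  return [...assumptions].sort((x, y) => score(y) - score(x));
}
```

A high-risk, low-confidence assumption (score 5 × 1/1 = 5) outranks a moderate one (3 × 1/3 = 1), which is exactly the order you want to test in.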
`design_experiment`: Design a ready-to-run lean experiment to test a specific assumption. Supports 10 experiment types:
| Type | Best for |
|---|---|
| `customer_interview` | Validating the problem is real and painful |
| `landing_page` | Measuring demand before building anything |
| `fake_door` | Testing feature interest without building |
| `smoke_test` | Cold audience interest via email/social |
| `concierge_mvp` | Validating willingness to pay manually |
| `wizard_of_oz` | Testing full flow without automation |
| `survey` | Quantitative validation at scale |
| `prototype` | Usability testing with a Figma/Framer mockup |
| `a_b_test` | Comparing two messages, offers, or CTAs |
| `cold_outreach` | B2B demand measurement |
Each experiment is saved to the database and linked to the idea. The idea's `validationStatus` is automatically set to `in_progress`.
Returns: hypothesis template, step-by-step method, specific success criteria (with numbers), and what to do if the experiment passes or fails.
`log_experiment_result`: Record the outcome of a completed experiment with raw evidence and learnings. Updates the experiment record and recalculates the idea's `validationStatus`:

- `validated` if ≥2 experiments passed and passing > failing
- `invalidated` if failing ≥ passing (a tie counts as not proven)
- `in_progress` otherwise

`generate_validation_questions`: Generate customer discovery questions following Mom Test principles — ask about past behaviour, not future intent.
Sections: warm-up, problem exploration, current behaviour, solution reaction, economics/willingness to pay. Includes follow-up probes for the most important questions.
Format: interview (open-ended, conversational) or survey (structured, multiple choice friendly).
Mom Test rule: "Have you ever..." not "Would you...". Specifics, not generalities. Never lead the witness.
`build_validation_scorecard`: Aggregate all experiments for an idea into a single summary scorecard.
`define_mvp`: Define the smallest testable version of an idea. Supports 6 MVP types:
| Type | When to use |
|---|---|
| `concierge` | Manually deliver the service to first users — no code |
| `wizard_of_oz` | Build the UI, do the backend manually |
| `landing_page` | Measure demand with email capture before building |
| `single_feature` | Build only the one feature that delivers core value |
| `prototype` | Clickable Figma/Framer mockup — zero code |
| `full_build` | Simple real product — happy path only |
Produces: core value hypothesis, must-have features (3–5 max), explicit cut list, build plan, success and failure metrics, and a launch checklist. Saves the MVP type to the idea record.
Key principle: every feature you add is a hypothesis that needs its own validation.
`generate_landing_page_copy`: Generate complete conversion-optimised landing page copy to test demand before building.

CTA modes: `email_signup`, `waitlist`, `book_demo`, `early_access`, `buy_now`.
Tone options: professional, conversational, bold, empathetic.
`export_report`: Export a full Markdown report to disk, with an option to include or exclude archived/rejected items.
All data is stored locally in a single JSON file, `ideas.json`, under the configured data path.

License: MIT
`capture_idea` parameters:

- `title`, `description`, `category`, `tags`
- `impactScore` (1–10), `feasibilityScore` (1–10), `noveltyScore` (1–10), `effortScore` (1–10)
- `notes`
`capture_pain_point` parameters:

- `title`, `description`, `context`, `affectedUsers`
- `frequency` — `rare` | `occasional` | `frequent` | `constant`
- `impact` — `low` | `medium` | `high` | `critical`
- `category`, `tags`, `notes`
```
~/ideas-data/
  ideas.json                  ← ideas, pain points, and experiments
  ideas-report-YYYY-MM-DD.md  ← exported reports (optional)
```
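Since everything lives in one JSON file, persistence is just read/parse and stringify/write. A minimal sketch, assuming the file holds one object with three arrays (function names here are illustrative, not the plugin's actual API):

```typescript
import { readFileSync, writeFileSync, existsSync } from "node:fs";

interface Store {
  ideas: unknown[];
  painPoints: unknown[];
  experiments: unknown[];
}

function loadStore(path: string): Store {
  // Missing file → start with an empty store rather than throwing.
  if (!existsSync(path)) {
    return { ideas: [], painPoints: [], experiments: [] };
  }
  return JSON.parse(readFileSync(path, "utf8")) as Store;
}

function saveStore(path: string, store: Store): void {
  // Pretty-print so the file stays human-readable and diff-friendly.
  writeFileSync(path, JSON.stringify(store, null, 2));
}
```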
Idea:

```
id, title, description, category, tags, status
impactScore, feasibilityScore, noveltyScore, effortScore
linkedPainPointIds, assumptions[], mvpDefinition
validationStatus   ← not_started | in_progress | validated | invalidated
notes, createdAt, updatedAt
```

Pain point:

```
id, title, description, context, affectedUsers
frequency, impact, category, tags, status
problemStatement, linkedIdeaIds[]
notes, createdAt, updatedAt
```

Experiment:

```
id, ideaId, title, type, hypothesis, method
successCriteria, effort, cost
result   ← pending | validated | invalidated | inconclusive
evidence, learnings, testedAssumptionIds[]
createdAt, updatedAt
```
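Rendered as a TypeScript interface, the idea record might look like this. This is a sketch: which fields are optional is an assumption, and the exact `status` values aren't enumerated above.

```typescript
type ValidationStatus = "not_started" | "in_progress" | "validated" | "invalidated";

interface Idea {
  id: string;
  title: string;
  description: string;
  category: string;
  tags: string[];
  status: string;           // exact status values not enumerated in the schema
  impactScore: number;      // 1–10
  feasibilityScore: number; // 1–10
  noveltyScore: number;     // 1–10
  effortScore: number;      // 1–10
  linkedPainPointIds: string[];
  assumptions: string[];
  mvpDefinition?: string;
  validationStatus: ValidationStatus;
  notes?: string;
  createdAt: string;        // ISO timestamp (assumed)
  updatedAt: string;
}
```

The pain point and experiment records follow the same pattern over their field lists.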
```
# 1. Log what you observed
capture_pain_point(
  title="Code review is fragmented across 4 tools",
  affectedUsers="Senior developers on 5+ person teams",
  frequency="frequent", impact="high"
)

# 2. Write the problem statement
generate_problem_statement(painPointId="...")

# 3. Capture a solution idea
capture_idea(
  title="Unified code review sidebar",
  description="VS Code extension that aggregates PR comments, CI status, and docs",
  impactScore=8, feasibilityScore=7, noveltyScore=6, effortScore=5
)

# 4. Link them
link_pain_to_idea(painPointId="...", ideaId="...")

# 5. Check the market
search_market_size("developer tools code review")

# 6. Map what must be true
map_assumptions(ideaId="...")

# 7. Design the cheapest test
design_experiment(
  ideaId="...",
  assumption="Developers will pay $15/month for this",
  experimentType="landing_page",
  budget="$0",
  timeAvailable="days"
)

# 8. Run it, log results
log_experiment_result(
  experimentId="...",
  result="validated",
  evidence="62 sign-ups from 800 visitors (7.75% CVR). 12 replied to follow-up email."
)

# 9. Define the MVP
define_mvp(ideaId="...", mvpType="single_feature", targetUser="Senior devs at startups")

# 10. Generate the launch page
generate_landing_page_copy(
  ideaId="...",
  targetUser="senior developers",
  coreBenefit="All your code review context in one place",
  ctaGoal="waitlist"
)

# 11. Build the scorecard
build_validation_scorecard(ideaId="...")

# 12. Export everything
export_report()
```
```
not_started
    ↓  (first experiment designed)
in_progress
    ↓
validated    ← ≥2 experiments pass, passing > failing
invalidated  ← failing ≥ passing (tie counts as not proven)
```
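The transitions above can be sketched as a pure function over experiment results. This is a sketch of the documented rules, not the plugin's actual implementation:

```typescript
type Result = "pending" | "validated" | "invalidated" | "inconclusive";
type ValidationStatus = "not_started" | "in_progress" | "validated" | "invalidated";

function recalcValidationStatus(results: Result[]): ValidationStatus {
  if (results.length === 0) return "not_started"; // nothing designed yet
  const passing = results.filter((r) => r === "validated").length;
  const failing = results.filter((r) => r === "invalidated").length;
  if (passing >= 2 && passing > failing) return "validated";
  if (failing > 0 && failing >= passing) return "invalidated"; // a tie is "not proven"
  return "in_progress"; // pending/inconclusive, or a single pass
}
```

Note that a single passing experiment keeps the idea at `in_progress`: validation requires at least two passes, and any tie resolves to `invalidated`.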