A research-grade web search plugin that goes beyond snippets — it reads pages, verifies claims, detects contradictions, finds primary sources, and explains its reasoning.
Standard search gives you ten links and short snippets: no reading of pages, no verification of claims, no tracing back to primary sources. This plugin fixes all of that.
Load the built plugin in LM Studio.
| Field | Default | Description |
|---|---|---|
| Max Search Results | 8 | Results retrieved per query |
| Max Pages to Read | 3 | Pages actually fetched and read per search |
| Page Fetch Timeout | 8000ms | Per-page timeout before giving up |
| Search Language | en-us | Language/region for results |
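These defaults can be captured in a small config type. Note the field names below are illustrative sketches, not the plugin's actual identifiers:

```typescript
// Hypothetical shape of the plugin's configuration.
// Field names are assumptions; only the defaults come from the table above.
interface SearchPluginConfig {
  maxSearchResults: number;   // results retrieved per query
  maxPagesToRead: number;     // pages actually fetched and read per search
  pageFetchTimeoutMs: number; // per-page timeout before giving up
  searchLanguage: string;     // language/region for results
}

const defaults: SearchPluginConfig = {
  maxSearchResults: 8,
  maxPagesToRead: 3,
  pageFetchTimeoutMs: 8000,
  searchLanguage: "en-us",
};
```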
### search — Core search with page reading

The main tool. Unlike basic search, it fetches and reads the actual page content — not just snippets.
Returns:
### fetch_and_read — Read a specific URL

Fetch any URL and return the full readable text content. This is the capability regular search completely lacks.
Use when:
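The page-reading step can be sketched as a naive HTML-to-text pass. The real plugin presumably uses a proper readability extractor; this sketch only shows the idea:

```typescript
// Illustrative only: strip scripts, styles, and tags, collapse whitespace,
// and cap the output length — a rough stand-in for readable-text extraction.
function extractReadableText(html: string, maxChars = 8000): string {
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .replace(/&nbsp;/g, " ")
    .replace(/\s+/g, " ")
    .trim();
  return text.slice(0, maxChars);
}
```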
### deep_search — Multi-angle research

Runs 3–5 separate searches from different perspectives on the same topic, reads pages for each, and returns everything together. Defeats single-search bias.
Default angles: overview facts, latest research, criticism/limitations, expert consensus.
You can specify your own angles, e.g.:
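In TypeScript, the topic-plus-angles fan-out might look like the sketch below. The query wording is an assumption; the custom angles shown are the example from the tool reference:

```typescript
// Default angles come from the docs; combining them with the topic into
// one query string each is an assumed implementation detail.
const DEFAULT_ANGLES = [
  "overview facts",
  "latest research",
  "criticism limitations",
  "expert consensus",
];

function buildAngleQueries(topic: string, angles?: string[]): string[] {
  return (angles ?? DEFAULT_ANGLES).map((angle) => `${topic} ${angle}`);
}

// Custom angles, e.g. for a policy topic:
buildAngleQueries("lithium mining", [
  "economic impact",
  "environmental cost",
  "industry response",
  "regulatory landscape",
]);
```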
### fact_check — Verify a specific claim

Cross-checks a claim across four search angles: direct confirmation, debunking searches, evidence searches, and expert opinion. Returns raw evidence from all angles for the LLM to assess.
Verdict categories: supported, disputed, unsupported, nuanced, uncertain.
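A sketch of the four cross-check angles as search queries. The exact phrasing is an assumption; note that the verdict itself is left to the LLM, matching the tool's design:

```typescript
// Verdict categories from the docs; the query wording is illustrative.
type Verdict = "supported" | "disputed" | "unsupported" | "nuanced" | "uncertain";

function factCheckQueries(claim: string): Record<string, string> {
  return {
    confirmation: `${claim} evidence confirmed`,
    debunking: `${claim} debunked myth false`,
    evidence: `${claim} study data statistics`,
    expert: `${claim} expert scientific consensus`,
  };
}
```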
### verify_statistic — Verify a number or percentage

Statistics are frequently outdated, misquoted, out of context, or fabricated. This tool searches for the stat, its primary source, fact-check results, and updated data.
Example: verify_statistic("90% of startups fail in year one", "venture-backed US tech startups")
### find_primary_source — Trace a claim to its origin

Secondary sources often distort original findings. This tool searches for the original study, report, official document, or statement where a claim first appeared.
Prioritises: peer-reviewed journals, government reports, official organisation publications over secondary citations.
### search_recent — Time-filtered search

Only returns results from the specified time window. Prevents stale results from dominating on fast-moving topics.
Windows: day (last 24h), week, month, year.
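Mapping a window to a cutoff date might look like the sketch below; the real plugin may instead pass a freshness parameter straight to its search backend:

```typescript
// Window names from the docs; the day counts for "month" and "year" are
// assumed approximations.
type TimeWindow = "day" | "week" | "month" | "year";

const WINDOW_DAYS: Record<TimeWindow, number> = {
  day: 1,
  week: 7,
  month: 30,
  year: 365,
};

function windowCutoff(window: TimeWindow, now: Date = new Date()): Date {
  const ms = WINDOW_DAYS[window] * 24 * 60 * 60 * 1000;
  return new Date(now.getTime() - ms);
}
```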
### compare_sources — Surface agreements and conflicts

Fetches multiple sources on the same topic and returns them side by side for the LLM to compare framing, spot conflicts, and identify unique claims.
Provide specific URLs to compare, or let it search and pick sources with varied domains automatically.
Returns structured analysis of:
### find_expert_views — Expert consensus and dissent

Searches specifically for academic research, official positions, expert interviews, and scientific consensus — not what random blogs claim experts say.
Covers four angles: expert consensus, peer-reviewed research, official institutional positions, and active scientific debate.
### search_academic — Academic papers only

Searches arXiv, PubMed, and Semantic Scholar for peer-reviewed papers and research publications.
Sources: arxiv, pubmed, semantic_scholar, all.
Fetches paper pages to extract abstracts, methodology, and findings. The LLM is instructed to distinguish preprints from peer-reviewed work, note sample sizes, and not overstate findings.
### search_news — News-specific search

News-filtered search that actively ranks established journalism above blogs, product pages, and content farms. Runs two queries — one general, one targeting major news outlets — then ranks high-credibility results first.
Windows: day, week, month, any.
Unlike search_recent (which filters by date), this filters by source type — it's about journalistic sourcing, not just recency. Best for: breaking news, corporate announcements, policy changes, anything where "who is reporting it" matters.
Returns: ranked results with credibility labels, read articles, and instruction to distinguish confirmed facts from allegations and single-source claims.
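A minimal sketch of the credibility-first ranking; the outlet list and result shape here are assumptions for illustration:

```typescript
// Assumed result shape and a small sample of major outlets; the real
// plugin's list and scoring are likely richer.
interface NewsResult {
  url: string;
  title: string;
}

const MAJOR_OUTLETS = ["reuters.com", "apnews.com", "bbc.co.uk", "nytimes.com"];

function rankByCredibility(results: NewsResult[]): NewsResult[] {
  // Score 0 for recognised outlets, 1 otherwise; stable sort keeps the
  // original order within each tier.
  const score = (r: NewsResult) =>
    MAJOR_OUTLETS.some((d) => new URL(r.url).hostname.endsWith(d)) ? 0 : 1;
  return [...results].sort((a, b) => score(a) - score(b));
}
```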
### research_topic — Full multi-step research brief

Runs multiple searches from different angles, reads key pages, and instructs the LLM to produce a structured research brief: overview, established facts, contested areas, expert consensus, open questions, key sources, and confidence assessment.
Depths:
- overview — 3 angles, 2 pages each
- detailed — 5 angles, 2 pages each (default)
- comprehensive — 7 angles, 3 pages each

### check_source — Source credibility assessment

Assesses a URL or domain and returns its credibility type, known signals, reputation search results, and red flags to watch for.
Domain types: government, academic institution, academic/research platform, established news outlet, encyclopedia, user-generated content, unknown.
Credibility levels: high, medium, low, unknown.
Red flags checked:
Every search result and fetched page gets a credibility assessment based on domain signals:
| Domain Type | Credibility | Examples |
|---|---|---|
| Government | HIGH | .gov, .mil, WHO, CDC |
| Academic institution | HIGH | .edu, .ac.uk, universities |
| Academic platforms | HIGH | arXiv, PubMed, Semantic Scholar |
| Established news | HIGH | Reuters, AP, BBC, Nature, NYT |
| Wikipedia | MEDIUM | Good overview, verify citations |
| User-generated / blogs | LOW | Blogspot, WordPress, Reddit, Quora |
| Unknown | UNKNOWN | Check About page and author credentials |
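The table translates almost directly into a classifier. The suffix-matching rules below are simplified and illustrative, not the plugin's actual logic:

```typescript
// Each branch mirrors one row of the credibility table above.
type Credibility = "HIGH" | "MEDIUM" | "LOW" | "UNKNOWN";

function classifyDomain(url: string): { type: string; credibility: Credibility } {
  const host = new URL(url).hostname;
  const ends = (...suffixes: string[]) => suffixes.some((s) => host.endsWith(s));

  if (ends(".gov", ".mil", "who.int")) return { type: "government", credibility: "HIGH" };
  if (ends(".edu", ".ac.uk")) return { type: "academic institution", credibility: "HIGH" };
  if (ends("arxiv.org", "pubmed.ncbi.nlm.nih.gov", "semanticscholar.org"))
    return { type: "academic platform", credibility: "HIGH" };
  if (ends("reuters.com", "apnews.com", "bbc.co.uk", "nature.com", "nytimes.com"))
    return { type: "established news", credibility: "HIGH" };
  if (ends("wikipedia.org")) return { type: "encyclopedia", credibility: "MEDIUM" };
  if (ends("blogspot.com", "wordpress.com", "reddit.com", "quora.com"))
    return { type: "user-generated", credibility: "LOW" };
  return { type: "unknown", credibility: "UNKNOWN" };
}
```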
The plugin's system prompt instructs the LLM to:
- **Simple fact:** "What is the Dunning-Kruger effect?" → `search`
- **Verify a claim:** "Is it true that we only use 10% of our brains?" → `fact_check`
- **Verify a statistic:** "Someone told me 50,000 species go extinct every year. Is that right?" → `verify_statistic`
- **Recent developments:** "What's happened with GPT-5 in the last week?" → `search_recent(window: "week")`
- **Compare perspectives:** "What do different sources say about seed oils and health?" → `compare_sources` or `deep_search`
- **Scientific consensus:** "What does the research actually say about intermittent fasting?" → `find_expert_views` + `search_academic`
- **Deep research:** "Give me a thorough research brief on quantum error correction" → `research_topic(depth: "comprehensive")`
- **Read a specific article:** "Can you read this paper and summarise the key findings? [url]" → `fetch_and_read`
- **Check if a source is reliable:** "Is naturalhealth365.com a reliable source?" → `check_source`
```shell
cd web-search-plugin
npm install
npx tsc
```
```
search(query, max_pages_to_read?)
fetch_and_read(url, max_chars?)
deep_search(topic, angles?, pages_per_angle?)
  angles: ["economic impact", "environmental cost", "industry response", "regulatory landscape"]
fact_check(claim)
verify_statistic(statistic, context?)
find_primary_source(claim, domain?)
search_recent(query, window?, read_pages?)
compare_sources(topic, urls?, num_sources?)
find_expert_views(topic, field?)
search_academic(topic, source?, year_from?)
search_news(query, window?, read_pages?)
research_topic(topic, depth?, focus?)
check_source(url)
```