mcp-techtrend

Multi-source MCP server: arXiv + PubMed + GitHub + HuggingFace + openFDA 510(k)/Recalls. Newspaper-style briefings, per-domain tuning, sandbox-safe Python launcher.


Korean documentation: README.ko.md

A single MCP server that pulls academic + code + medical-device-regulatory trend data from seven sources and renders newspaper-style briefings — with per-domain tuning baked in.

| Source | Tools | Notes |
|---|---|---|
| arXiv | arxiv_recent, arxiv_search | Per-category round-robin so small categories aren't drowned by big ones |
| PubMed | pubmed_search | Full abstracts via efetch.fcgi |
| HF Daily Papers | paperswithcode_trending | Sorted by community upvotes (replaces the sunset PwC API) |
| GitHub | github_trending, github_search | Trending page scrape + Search API with created:> date filter |
| Hugging Face | huggingface_trending | Models / datasets / spaces, trending or recent |
| openFDA 510(k) | fda_510k_recent | Device clearances |
| openFDA Recalls | fda_recalls_recent | Recall events with class filter |
| (aggregators) | trends_digest, trends_briefing | Multi-source parallel calls |

trends_briefing is the headline tool: ask for "weekly news" / "주간 뉴스" and get a newspaper-formatted briefing across all enabled sources, automatically translated into the user's conversation language by the LLM.


Why this exists

Most academic / code / regulatory MCP servers are single-source. This one is multi-source and domain-aware: a researcher tracking medical-imaging AI, an ML engineer following ML papers, a security analyst watching CVEs and trending repos — all configure once via python configure.py, then trends_briefing becomes the "Monday morning newspaper" for their domain.

What makes it useful:

  • Newspaper format with translation hint — the LLM auto-translates source text (paper abstracts, recall reasons, etc.) to your conversation language while preserving identifiers, URLs, and metric values verbatim.
  • Per-category round-robin for arXiv — cs.HC (~50 papers/wk) doesn't get drowned by cs.LG (~1500/wk) when both are tracked together.
  • TTL cache + concurrent-request coalescing — repeat calls and parallel briefings don't hammer upstream APIs.
  • No required tokens. All seven sources work anonymously; tokens just raise the per-source rate limit ceiling.
  • Sandbox-safe Python launcher. Bypasses the claude_desktop_config.json env block (which truncates whitespace-containing values on some macOS builds) by setting environment variables in Python before handing off to the server.
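The launcher pattern can be sketched in a few lines. This is a minimal illustration, not the actual run.py; the SETTINGS names match the Configuration section, but apply_settings is a hypothetical helper:

```python
import os
import runpy

# Values set here survive intact; the claude_desktop_config.json "env"
# block can truncate whitespace-containing values on some macOS builds.
SETTINGS = {
    "TRENDS_ENABLED_SOURCES": "",  # "" = all sources
    "TRENDS_ARXIV_CATEGORIES": "cs.LG:5,cs.CV:3,cs.CL:3,cs.AI:2",
}

def apply_settings(env, settings):
    """Copy settings into env without clobbering values set externally."""
    for key, value in settings.items():
        env.setdefault(key, value)
    return env

if __name__ == "__main__":
    apply_settings(os.environ, SETTINGS)
    # Hand off to the actual server module in the same process.
    runpy.run_module("trends_mcp", run_name="__main__")
```

Because the variables are set in-process before the server module runs, whitespace-containing values like the PubMed query arrive untouched.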

Install

git clone https://github.com/salwks/mcp-techTrend.git
cd mcp-techTrend
python3 -m venv .venv
.venv/bin/pip install -r requirements.txt

Connect to Claude Desktop by editing ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "trends": {
      "command": "/path/to/trends-mcp/.venv/bin/python",
      "args": ["/path/to/trends-mcp/run.py"]
    }
  }
}

⚠️ args points at run.py (the launcher), not trends_mcp.py. The launcher sets domain-specific env vars before the server starts.

Restart Claude Desktop. The trends server should appear with the tools listed below (count depends on your enabled sources).


Configuration

One source of truth: run.py. Three ways to edit it:

A. Interactive TUI — configure.py (recommended)

python configure.py
═══ trends-mcp 설정 ═══
  [1] Active sources       (7/7 enabled)
  [2] arXiv categories     (4 entries · 13 papers/wk)
  [3] PubMed query
  [4] API tokens           (0/4 set)
  [5] Show current config
  [6] Save and restart
  [7] Quit without saving

Toggle sources with numbers, set arXiv weights with set 1 7, apply presets with preset medical-imaging, save with [6]. The save action backs up to run.py.bak, writes the new SETTINGS block (AST-based — never touches non-config code), and runs pkill -f trends_mcp so Claude Desktop respawns the server with the new config on next call.
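The AST-based rewrite can be sketched as follows. This is an illustrative sketch of the approach, not the project's actual code: parse the file, locate the one assignment by name, and splice in the new value without touching any other line.

```python
import ast

def set_setting(source: str, name: str, value: str) -> str:
    """Rewrite `name = ...` in a SETTINGS block, leaving all other code alone."""
    tree = ast.parse(source)
    lines = source.splitlines()
    for node in tree.body:
        if (isinstance(node, ast.Assign)
                and len(node.targets) == 1
                and isinstance(node.targets[0], ast.Name)
                and node.targets[0].id == name):
            # Replace only the line span of this assignment (3.8+ end_lineno).
            lines[node.lineno - 1:node.end_lineno] = [f"{name} = {value!r}"]
            return "\n".join(lines) + "\n"
    raise KeyError(f"{name} not found in SETTINGS block")
```

Locating the assignment via the AST (rather than a regex) is what guarantees a value containing "=" or a newline can't corrupt neighboring code.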

The TUI menu labels are in Korean; commands and presets are in English. i18n of the TUI itself is on the v0.2 roadmap.

Single-shot modes:

python configure.py --show       # print current config
python configure.py --restart    # pkill stale MCP processes

B. Direct edit — run.py SETTINGS block

TRENDS_ENABLED_SOURCES = ""                          # "" = all
TRENDS_ARXIV_CATEGORIES = "cs.LG:5,cs.CV:3,cs.CL:3,cs.AI:2"
TRENDS_DEFAULT_PUBMED_QUERY = "(deep learning OR AI) AND (medical OR clinical)"
# GITHUB_TOKEN = "ghp_..."         # raises 60 → 5,000 req/h
# HF_TOKEN = "hf_..."
# NCBI_API_KEY = "..."             # raises 3 → 10 req/s for PubMed
# OPENFDA_API_KEY = "..."          # raises 1,000 → 120,000 req/day

Restart Claude Desktop after saving (or pkill -f trends_mcp).

C. From chat — trends_set_* tools

Same SETTINGS block, edited via MCP tool calls. The chat path and configure.py read & write the same run.py (single source of truth), so changes from either side are visible to the other.

Just say it in chat:

"Switch the trends PubMed query toward cardiology" "Keep only github and arxiv enabled" "Registering my GitHub token: ghp_..."

The host Claude picks the right trends_set_* tool and confirms what changed. After any change, restart Claude Desktop (or run pkill -f trends_mcp in a terminal) — the MCP server reads the SETTINGS block at spawn time.

| Tool | Purpose |
|---|---|
| trends_get_config | Show current sources, categories, query, and which tokens are set (values never returned) |
| trends_set_enabled_sources(sources) | Enable a subset; ["*"] or ["all"] for all |
| trends_set_arxiv_categories(categories) | ["cs.LG:5", "cs.HC:3"]-style list |
| trends_set_pubmed_query(query) | PubMed syntax (MeSH, [Title/Abstract] tags) |
| trends_set_token(provider, value) | provider ∈ {github, hf, ncbi, openfda}; empty value clears |

Tokens: trends-mcp only performs read operations, so create tokens with minimal scope. For GitHub that means no scopes at all: authentication alone raises the rate limit. Don't put a repo-scoped PAT here; it would be over-permissioned.

Presets

# AI/ML researcher (default)
TRENDS_ARXIV_CATEGORIES = "cs.LG:5,cs.CV:3,cs.CL:3,cs.AI:2"

# Medical imaging / clinical AI
TRENDS_ARXIV_CATEGORIES = "eess.IV:5,cs.CV:3,cs.HC:2,q-bio.QM:2"

# Robotics
TRENDS_ARXIV_CATEGORIES = "cs.RO:5,cs.AI:3,cs.LG:2,cs.CV:2"

# HCI / UX
TRENDS_ARXIV_CATEGORIES = "cs.HC:5,cs.CY:3,cs.AI:2,cs.SI:2"

# Security
TRENDS_ARXIV_CATEGORIES = "cs.CR:5,cs.LG:2,cs.NI:2"

# Computational biology
TRENDS_ARXIV_CATEGORIES = "q-bio.QM:4,q-bio.GN:3,q-bio.BM:3,stat.AP:2"

Common arXiv categories (full reference: ARXIV_CATEGORIES.md):

| Code | Field | Weekly papers (approx.) |
|---|---|---|
| cs.AI | Artificial Intelligence | 500–800 |
| cs.LG | Machine Learning | 1,500–2,000 (largest) |
| cs.CV | Computer Vision | 1,000–1,500 |
| cs.CL | NLP | 500–800 |
| cs.HC | HCI / UX | 50–100 |
| cs.RO | Robotics | 100–200 |
| cs.CR | Security | ~200 |
| eess.IV | Image/Video Processing (medical imaging) | 100–200 |
| q-bio.QM | Quantitative Methods (biology) | 50–100 |
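The weights in these presets drive the per-category round-robin described earlier. A minimal sketch of the interleaving idea, assuming each weight caps how many items a category contributes per fetch (illustrative, not the actual implementation):

```python
from itertools import zip_longest

def round_robin(feeds: dict[str, list], weights: dict[str, int]) -> list:
    """Interleave per-category feeds so small categories surface early.

    Each category contributes at most `weight` items, and the slices are
    zipped together, so cs.HC's handful of weekly papers isn't buried
    under cs.LG's thousands.
    """
    slices = [feeds[cat][: weights.get(cat, 1)] for cat in feeds]
    return [item
            for group in zip_longest(*slices)
            for item in group
            if item is not None]
```

With feeds {"cs.LG": 1500 items, "cs.HC": 3 items}, every cs.HC item lands within the first few positions instead of page 30.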

Source allowlist

TRENDS_ENABLED_SOURCES = "arxiv,github,huggingface,paperswithcode"
# → fda_510k, fda_recalls, pubmed tools won't appear in the tool list at all

Empty / "*" / "all" = enable everything. Disabled sources don't register their tools, so the chat tool list itself shrinks. trends_digest and trends_briefing remain registered and skip disabled sources gracefully.
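The allowlist semantics can be sketched like this. A hypothetical sketch of how registration-time filtering could work; enabled_sources and register_tools are illustrative names, not the server's real API:

```python
ALL_SOURCES = {"arxiv", "pubmed", "paperswithcode", "github",
               "huggingface", "fda_510k", "fda_recalls"}

def enabled_sources(setting: str) -> set[str]:
    """Parse TRENDS_ENABLED_SOURCES; empty, "*" and "all" mean everything."""
    setting = setting.strip().lower()
    if setting in ("", "*", "all"):
        return set(ALL_SOURCES)
    requested = {s.strip() for s in setting.split(",") if s.strip()}
    return requested & ALL_SOURCES  # silently drop unknown names

def register_tools(enabled: set[str], register) -> None:
    """A tool is only exposed when its source is enabled."""
    if "github" in enabled:
        register("github_trending")
        register("github_search")
    if "pubmed" in enabled:
        register("pubmed_search")
    register("trends_briefing")  # aggregators are always registered
```

Because filtering happens before registration, disabled sources never appear in the host's tool list at all, rather than erroring at call time.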


Tools

| Tool | Purpose |
|---|---|
| arxiv_recent | Recent papers in one category, by submission date |
| arxiv_search | Keyword / field-syntax search (ti:, au:, abs:, cat:) |
| pubmed_search | PubMed search (MeSH terms, field tags); abstracts via efetch |
| paperswithcode_trending | HF Daily Papers, sorted by community upvotes |
| github_trending | Browse github.com/trending (HTML scrape) |
| github_search | GitHub Search API; days filters by created: |
| huggingface_trending | HF Hub models / datasets / spaces |
| fda_510k_recent | Recent FDA 510(k) clearances |
| fda_recalls_recent | Recent FDA medical-device recalls (class filter) |
| trends_digest | Multi-source bullet-list digest, given a topic |
| trends_briefing | Multi-source newspaper briefing; topic optional |
| trends_get_config | Show current settings (token values never returned) |
| trends_set_enabled_sources | Toggle which sources are active |
| trends_set_arxiv_categories | Set arXiv categories + per-category weights |
| trends_set_pubmed_query | Set the default PubMed query |
| trends_set_token | Set / clear a rate-limit booster token |

All search tools accept days=N for recent-N-days filtering. trends_briefing groups results into 🎓 Research / 💻 Code & Models / 🏥 Regulatory sections. The five trends_get_config / trends_set_* tools are configuration mirrors of configure.py; see Configuration § C.

trends_digest vs trends_briefing

| | trends_digest | trends_briefing |
|---|---|---|
| Topic | required | optional ("what's new" mode) |
| Source range | configurable subset (default 4) | all enabled sources |
| Format | bullet-list digest | grouped newspaper format |
| Use case | topic deep-dive | regular weekly briefing |

Caching

Per-process in-memory TTL cache wraps every HTTP response. Concurrent identical requests are coalesced via per-key asyncio.Lock — N parallel callers fire one upstream request.

| TTL group | Length | Tools |
|---|---|---|
| Trending | 5 min | github_trending, paperswithcode_trending, huggingface_trending (trending sort), github_search (with days) |
| Default | 10 min | arxiv_recent, arxiv_search, github_search, huggingface_trending (other sorts) |
| Static | 1 hour | pubmed_search, fda_510k_recent, fda_recalls_recent |

Up to 256 entries; oldest evicted when full. No way to disable — TTLs are short enough that staleness is bounded.


Known limitations

  • GitHub Trending is HTML scraping — no official API exists. Layout changes can break it. Stable trending substitute: github_search with days=7 and sort=stars.
  • HF trendingScore is undocumented. API surface may change.
  • HF Daily Papers covers ~50 curated papers/day, not all of arXiv. It's a "what was talked about" feed, not exhaustive.
  • arXiv has no native trending — we approximate via category-balanced recent-submissions feeds.
  • openFDA classification field sometimes returns None even on recently classified recalls (upstream data lag). Search index lags too.

Roadmap (TODO)

  • v0.2: i18n for the TUI menu and briefing section headers
  • bioRxiv / medRxiv via RSS
  • Semantic Scholar (citation graph)
  • openFDA Adverse Events (MAUDE)
  • EU EUDAMED scraping
  • PMDA (Japan medical devices)
  • MFDS (Korea medical devices)
  • Mock-based test suite for CI

License

MIT
