SEC Filings and Earnings Call

The MCP server provides end-to-end workflows for SEC filings and earnings call transcripts (ticker resolution, document retrieval, OCR, embedding, on-disk resource discovery, and semantic search), exposed over MCP and backed by the same vLLM-hosted olmOCR and embedding servers as the REST API.


Configuration

Settings are loaded via Pydantic Settings from environment variables or a .env file:

| Variable | Description | Default |
| --- | --- | --- |
| `SEC_API_ORGANIZATION` | Organization name for SEC API User-Agent | `Your-Organization` |
| `SEC_API_EMAIL` | Contact email for SEC API User-Agent | `[email protected]` |
| `OLMOCR_SERVER` | vLLM server URL for olmOCR | `http://localhost:8000/v1` |
| `OLMOCR_MODEL` | Model name for olmOCR | `allenai/olmOCR-2-7B-1025-FP8` |
| `OLMOCR_WORKSPACE` | Workspace directory for OCR output | `./localworkspace` |
| `EARNINGS_TRANSCRIPTS_DIR` | Directory for fetched transcript Markdown files | `earnings_transcripts_data` |
| `EMBEDDING_SERVER` | OpenAI-compatible embedding API (e.g. vLLM pooling) | `http://127.0.0.1:8888/v1` |
| `EMBEDDING_MODEL` | Model id passed to the embedding server | `Qwen/Qwen3-Embedding-0.6B` |
| `CHROMA_PERSIST_DIR` | ChromaDB persistence directory | `./chroma_db` |
| `MCP_HOST` | Bind address for the MCP HTTP server | `127.0.0.1` |
| `MCP_PORT` | Listen port for the MCP HTTP server | `8069` |
| `MCP_NGROK_ALLOWED_HOSTS` | JSON list of extra Host values allowed through the tunnel (see MCP section) | (see `finance_data/settings.py`) |
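As a rough sketch of how the defaults above behave, each variable is read from the environment (or `.env`) and falls back to its documented default when unset. The real project uses Pydantic Settings; the plain `os.environ` lookup below is illustrative only, and `DEFAULTS` lists just a subset of the table:

```python
import os

# Illustrative subset of the documented defaults (the real values live in
# finance_data/settings.py as a Pydantic Settings class).
DEFAULTS = {
    "OLMOCR_SERVER": "http://localhost:8000/v1",
    "EMBEDDING_SERVER": "http://127.0.0.1:8888/v1",
    "MCP_HOST": "127.0.0.1",
    "MCP_PORT": "8069",
}

def setting(name: str) -> str:
    """Return the env var value, falling back to the documented default."""
    return os.environ.get(name, DEFAULTS[name])
```

Setting `OLMOCR_SERVER=http://gpu-box:8000/v1` in `.env` would therefore override only that entry while the others keep their defaults.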

MCP server

mcp_server.py exposes SEC filing and earnings-transcript workflows over MCP (fetch/OCR, embed, semantic search) using the same backends as the REST API: olmOCR and an OpenAI-compatible embedding endpoint backed by vLLM.

1. Start the vLLM backends

The MCP tools need both servers running before you start the MCP process.

Terminal A — olmOCR (vision / markdown pipeline) — must match OLMOCR_SERVER (default http://localhost:8000/v1):

make vllm-olmocr-serve

Terminal B — embeddings (pooling runner) — must match EMBEDDING_SERVER (default http://127.0.0.1:8888/v1):

make vllm-embd-serve

If you change PORT / EMBD_PORT in the Makefile or your environment, set OLMOCR_SERVER and EMBEDDING_SERVER in .env so they point at the same hosts and ports.
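A quick way to verify that alignment is to compare the port in each server URL against the port the Makefile serves on. This is a hedged sanity check, not part of the project; the expected port numbers below are the documented defaults:

```python
from urllib.parse import urlparse

def port_matches(server_url: str, expected_port: int) -> bool:
    """True if the URL's port equals the port the backend actually listens on."""
    return urlparse(server_url).port == expected_port

# Documented defaults: olmOCR on 8000, embeddings on 8888.
assert port_matches("http://localhost:8000/v1", 8000)
assert port_matches("http://127.0.0.1:8888/v1", 8888)
```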

2. Install dependencies and run the MCP server

Chroma, OpenAI client, and OCR-related imports require the ocr-md group in addition to mcp:

uv sync --group ocr-md --group mcp
uv run --group ocr-md --group mcp python mcp_server.py

The server listens on MCP_HOST / MCP_PORT (defaults 127.0.0.1:8069) using the streamable HTTP transport. The HTTP endpoint path is /mcp (FastMCP default), so locally that is http://127.0.0.1:8069/mcp.

3. Expose with ngrok and connect a client

To use the MCP server from another machine or from a hosted MCP client, tunnel the MCP port with ngrok (or a similar HTTPS reverse proxy).

  1. Install and log in to ngrok (ngrok config add-authtoken …).

  2. With mcp_server.py still running, forward the MCP port (replace 8069 if you changed MCP_PORT):

    ngrok http 8069
    
  3. Note the public HTTPS hostname ngrok assigns (for example https://random-name.ngrok-free.app or *.ngrok-free.dev).

  4. Add that hostname to MCP_NGROK_ALLOWED_HOSTS so DNS rebinding protection accepts the tunnel’s Host header. In .env, use a JSON array, for example:

    MCP_NGROK_ALLOWED_HOSTS='["random-name.ngrok-free.app"]'
    

    Restart mcp_server.py after changing this.

  5. Point your MCP client at the tunneled URL including /mcp, for example:

    https://random-name.ngrok-free.app/mcp

Use your client’s documented configuration for Streamable HTTP / URL-based MCP servers. If the tunnel hostname changes each time you run ngrok, update MCP_NGROK_ALLOWED_HOSTS and restart the MCP process.
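Since `MCP_NGROK_ALLOWED_HOSTS` must be a JSON array, a malformed value (for example a bare hostname without brackets and quotes) will fail to parse. A minimal sketch of how such a value can be read, assuming the actual parsing lives in `finance_data/settings.py`:

```python
import json
import os

# Read the JSON-array env var; the default here is the example hostname from
# the docs, purely for illustration.
raw = os.environ.get("MCP_NGROK_ALLOWED_HOSTS", '["random-name.ngrok-free.app"]')
allowed_hosts = json.loads(raw)  # raises json.JSONDecodeError on a bare hostname
assert isinstance(allowed_hosts, list)
```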

Tools and resources

Tools (representative):

  • company_name_to_ticker_tool, list_resources_tool
  • sec_main_to_markdown_and_embed_tool, earnings_transcript_for_quarter_tool
  • search_sec_filings_tool, search_transcripts_tool

For an interactive walkthrough of using the MCP server, open this ChatGPT chat.

Resources (URI catalogs under resource://sec-filings-data/...): combined SEC + transcript file listings and per-root trees.
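One plausible way such a catalog maps on-disk files to `resource://sec-filings-data/...` URIs is to walk a data root and prefix relative paths. The URI scheme comes from the docs; the walk logic below is an illustrative sketch, not the server's actual implementation:

```python
from pathlib import Path
import tempfile

def catalog(root: Path) -> list[str]:
    """List every file under root as a resource://sec-filings-data/... URI."""
    base = "resource://sec-filings-data"
    return sorted(
        f"{base}/{p.relative_to(root).as_posix()}"
        for p in root.rglob("*")
        if p.is_file()
    )

# Demo against a throwaway directory with one hypothetical filing.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "AMZN-2025").mkdir()
    (root / "AMZN-2025" / "10-K.md").write_text("# filing")
    uris = catalog(root)
```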

Docker

Build

docker build -t sec-filings-md .

The image defaults to the smaller CUDA runtime base (rather than the full devel image) while still preinstalling Playwright Chromium for scraping. To skip the Playwright browser installation and reduce image size further, build with:

docker build --build-arg INSTALL_PLAYWRIGHT_BROWSER=0 -t sec-filings-md .

Or via Makefile:

make docker-build

Run

GPU_DEVICE=${GPU_DEVICE:-3}
docker run --gpus device=${GPU_DEVICE} \
  -e SEC_API_ORGANIZATION="Your-Organization" \
  -e SEC_API_EMAIL="[email protected]" \
  -v ./sec_data:/app/sec_data \
  -v ./localworkspace:/app/localworkspace \
  -p 8081:8081 \
  sec-filings-md

Or via Makefile (build + run in one step):

make docker-start

Makefile overrides:

| Variable | Description | Default |
| --- | --- | --- |
| `IMAGE_NAME` | Docker image name | `sec-filings-md` |
| `GPU_DEVICE` | GPU device index | `0` |
| `API_PORT` | Host port for API | `8081` |
| `SEC_API_ORGANIZATION` | SEC API User-Agent org | `Your-Organization` |
| `SEC_API_EMAIL` | SEC API contact email | `[email protected]` |

Example with overrides:

make docker-start GPU_DEVICE=3 SEC_API_EMAIL="[email protected]"

The two volumes persist data across container restarts:

| Volume | Container path | Purpose |
| --- | --- | --- |
| `sec_data` | `/app/sec_data` | Downloaded SEC filing PDFs |
| `localworkspace` | `/app/localworkspace` | OCR workspace and output markdown |

Override the workspace path at runtime with -e OLMOCR_WORKSPACE=/custom/path.

Installation

uv sync
playwright install chromium

Install OCR/markdown + embedding stack dependencies when you need those pipelines:

uv sync --group ocr-md

Package install (for publishing/consuming from PyPI):

pip install finance_data_llm

Use package functions directly from Python (no server process required):

import asyncio

from finance_data.filings.sec_data import sec_main
from finance_data.filings.utils import company_to_ticker

ticker = company_to_ticker("Amazon") or "AMZN"
sec_result, pdf_path = asyncio.run(
    sec_main(ticker=ticker, year="2025", filing_type="10-K")
)

If you do want to run the API, use the packaged console script:

finance-data-llm-server

Usage

Start vLLM server:

make vllm-olmocr-serve

Benchmark vLLM with guidellm (start the vLLM server first, then in another terminal):

make guidellm-benchmark

Fetch SEC filings:

uv run python -m finance_data.filings.sec_data --ticker AMZN --year 2025

Run OCR pipeline:

uv run python -m finance_data.ocr.olmocr_pipeline --pdf-dir sec_data/AMZN-2025

Earnings call transcripts

Transcripts are scraped from discountingcashflows.com (Playwright + Chromium). Each quarter is saved as one Markdown file under {EARNINGS_TRANSCRIPTS_DIR}/{TICKER}/{year}/Q{n}_{YYYY-MM-DD}.md (date may be unknown-date when unavailable).

1. Fetch transcripts

CLI (writes files under earnings_transcripts_data by default):

uv run python -m finance_data.earnings_transcripts.transcripts AMZN 2025

Optional: --max-concurrency (default 4) to limit parallel quarter fetches.
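Conceptually, `--max-concurrency` caps how many quarter fetches run in parallel, which an `asyncio.Semaphore` expresses naturally. The sketch below is an assumption about the mechanism, with `fetch_quarter` standing in for the real Playwright scrape:

```python
import asyncio

async def fetch_quarter(q: int) -> str:
    await asyncio.sleep(0)  # placeholder for the real scrape
    return f"Q{q}"

async def fetch_year(max_concurrency: int = 4) -> list[str]:
    """Fetch all four quarters, at most max_concurrency in flight at once."""
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(q: int) -> str:
        async with sem:
            return await fetch_quarter(q)

    return await asyncio.gather(*(bounded(q) for q in range(1, 5)))

quarters = asyncio.run(fetch_year(max_concurrency=2))
```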

HTTP (same fetch + persist, with the API running):

curl -s -X POST "http://127.0.0.1:8081/earnings_transcripts/for_year" \
  -H "Content-Type: application/json" \
  -d '{"ticker":"AMZN","year":2025}'

Response body is a JSON array of transcript objects (ticker, year, quarter_num, date, speaker_texts, …).

2. Start embedding server and API

Transcript chunks are embedded with the same OpenAI-compatible embedding endpoint as SEC filings (EMBEDDING_SERVER / EMBEDDING_MODEL). In one terminal:

make vllm-embd-serve

In another:

make start-server

(Adjust API_PORT / EMBD_PORT in the Makefile or your environment if needed.)

3. Index transcripts in Chroma

curl -s -X POST "http://127.0.0.1:8081/vector_store/embed_transcripts" \
  -H "Content-Type: application/json" \
  -d '{"ticker":"AMZN","year":"2025","force":false}'

Use "force": true to replace existing vectors for those quarters. Filing types in the index appear as Q1–Q4.

4. Search across indexed quarters

Search merges hits from all transcript quarters present for that ticker/year:

curl -s -X POST "http://127.0.0.1:8081/vector_store/search_transcripts" \
  -H "Content-Type: application/json" \
  -d '{"ticker":"AMZN","year":"2025","query":"AWS revenue growth","top_k":5}'

Each result includes filing_type (Q1, …) so you can see which call the chunk came from.
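Grouping hits by that field is a one-liner away; the hit dicts below are hypothetical examples of the response shape, not real output:

```python
from collections import defaultdict

# Hypothetical search hits carrying the documented filing_type field.
hits = [
    {"filing_type": "Q1", "text": "AWS revenue grew ..."},
    {"filing_type": "Q3", "text": "operating margin ..."},
    {"filing_type": "Q1", "text": "guidance for ..."},
]

by_quarter: dict[str, list[str]] = defaultdict(list)
for hit in hits:
    by_quarter[hit["filing_type"]].append(hit["text"])
```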
