Rossum MCP & Agent

MCP server and AI agent toolkit for intelligent document processing with Rossum.

AI-powered Rossum orchestration: Document workflows conversationally, debug pipelines automatically, and configure automation through natural language.

Conversational AI toolkit for the Rossum intelligent document processing platform. Transforms complex workflow setup, debugging, and configuration into natural language conversations through a Model Context Protocol (MCP) server and specialized AI agent.

[!IMPORTANT] This project has moved to a company-private GitLab for a major overhaul. This public repository is temporarily archived and will not receive updates during that period.

[!NOTE] This is not an official Rossum project. It is a community-developed integration built on top of the Rossum API, not a product (yet).

What Can You Do?

Example 1: Organization Setup

Set up a complete customer organization with queues, schemas, validations, duplicate detection, email notifications, and UI configuration:

1. Create two new queues: Invoices and Credit Notes.
2. Update the schemas according to the schema specification (Invoices with 15 fields including a line items table, Credit Notes as-is)
3. Add a computed field "The Net Terms" to the Invoices queue (Due Date - Issue Date → Net 15/30/Outstanding)
4. Implement duplicate document detection on Document ID
5. Add business validations: total amount cap, line items sum check, quantity × unit price check
6. Add email notification extension on document status change to 'to_review'
7. Update Invoice queue UI settings to display 8 key fields
8. Verify setup by uploading a sample invoice twice (testing duplicate detection)

What This Demonstrates:

  • Queue & Schema Setup: Creates queues with detailed field specifications including line items tables
  • Computed Fields: Adds derived fields with business logic (date difference categorization)
  • Duplicate Detection: Configures document-level deduplication with user-facing messages
  • Business Validations: Implements multi-rule validation (amount caps, sum checks, arithmetic checks)
  • Email Notifications: Sets up templated email alerts triggered by document state changes
  • UI Configuration: Customizes queue column display for operational efficiency
  • End-to-End Verification: Validates the entire setup with real document uploads

This example showcases the agent's ability to set up a production-ready organization from scratch - all from a single conversational prompt.
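The computed field from step 3 can be sketched in plain Python. The thresholds and category labels below are assumptions inferred from the prompt (Due Date − Issue Date → Net 15/30/Outstanding), not the exact formula the agent generates:

```python
from datetime import date

def net_terms(issue_date: date, due_date: date) -> str:
    """Categorize payment terms from the day difference between due and issue dates."""
    days = (due_date - issue_date).days
    if days <= 15:
        return "Net 15"
    if days <= 30:
        return "Net 30"
    return "Outstanding"
```

In a real Rossum computed field, equivalent logic would run against the extracted date fields of each annotation.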

Example 2: Hook Analysis & Documentation

Automatically analyze and document all hooks/extensions configured on a queue:

Briefly explain the functionality of every hook on queue `2042843`, one by one, based on its description and/or code.

Store output in extension_explanation.md

What This Does:

  • Lists all hooks/extensions on the specified queue
  • Analyzes each hook's description and code
  • Generates clear, concise explanations of functionality
  • Documents trigger events and settings
  • Saves comprehensive documentation to a markdown file

This example shows how the agent can analyze existing automation to help teams understand their configured workflows.
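The documentation step can be sketched as a pure function: given hook objects like those returned by the Rossum API's hooks endpoint (the field names `name`, `events`, and `description` are a simplified assumption here), render one markdown section per hook:

```python
def document_hooks(hooks: list[dict]) -> str:
    """Render a markdown section per hook: name, trigger events, and description."""
    lines = ["# Extension Explanation", ""]
    for hook in hooks:
        lines.append(f"## {hook.get('name', 'Unnamed hook')}")
        events = ", ".join(hook.get("events", [])) or "none"
        lines.append(f"- **Trigger events**: {events}")
        lines.append(f"- **Description**: {hook.get('description') or 'n/a'}")
        lines.append("")
    return "\n".join(lines)
```

The agent additionally reads each hook's code to explain behavior the description leaves out, then writes the result to `extension_explanation.md`.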

Example 3: Aurora Splitting & Sorting Demo

Set up a complete document splitting and sorting pipeline with training queues, splitter engine, automated hooks, and intelligent routing:

1. Create three new queues in workspace `1777693` - Air Waybills, Certificates of Origin, Invoices.
2. Set up the schema on each queue with a single enum field named Document type (`document_type`).
3. Upload documents from folders air_waybill, certificate_of_origin, invoice in `examples/data/splitting_and_sorting/knowledge` to corresponding queues.
4. Annotate all uploaded documents with a correct Document type, and confirm the annotation.
    - Beware document types are air_waybill, invoice and certificate_of_origin (lower-case, underscores).
    - IMPORTANT: After confirming all annotations, double-check that all are confirmed/exported, and fix those that are not.
5. Create three new queues in workspace `1777693` - Air Waybills Test, Certificates of Origin Test, Invoices Test.
6. Set up the schema on each queue with a single enum field named Document type (`document_type`).
7. Create a new engine in organization `1`, with type = 'splitter'.
8. Configure engine training queues to be - Air Waybills, Certificates of Origin, Invoices.
    - DO NOT copy knowledge.
    - Update Engine object.
9. Create a new schema identical to the schema from queue `3885208`.
10. Create a new queue (with splitting UI feature flag!) with the created engine and schema in the same workspace called: Inbox.
11. Create a Python function-based **`Splitting & Sorting`** hook on the new Inbox queue with these settings:
    **Functionality**: Automatically splits multi-document uploads into separate annotations and routes them to appropriate queues.
    Split documents should be routed to the following queues: Air Waybills Test, Certificates of Origin Test, Invoices Test

    **Trigger Events**:
    - annotation_content.initialize (suggests split to user)
    - annotation_content.confirm (performs actual split)
    - annotation_content.export (performs actual split)

    **How it works**: Python code

    **Settings**:
    - sorting_queues: Maps document types to target queue IDs for routing
    - max_blank_page_words: Threshold for blank page detection (pages with fewer words are considered blank)
12. Upload 10 documents from the `examples/data/splitting_and_sorting/testing` folder to the Inbox queue.
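The hook's routing logic from step 11 can be sketched as a pure function. The `sorting_queues` and `max_blank_page_words` parameters mirror the settings described above, while the page-dict shape (`number`, `word_count`, `document_type`) is an assumption for illustration, not the actual payload the hook receives:

```python
def route_pages(pages: list[dict], sorting_queues: dict[str, int],
                max_blank_page_words: int = 3) -> list[tuple[int, int]]:
    """Drop blank pages, then map each remaining page to a target queue ID
    based on its predicted document type. Returns (page_number, queue_id) pairs."""
    routed = []
    for page in pages:
        if page["word_count"] < max_blank_page_words:
            continue  # pages with fewer words than the threshold are treated as blank
        queue_id = sorting_queues.get(page["document_type"])
        if queue_id is not None:
            routed.append((page["number"], queue_id))
    return routed
```

In the real hook, this decision runs on `annotation_content.initialize` to suggest a split and on `annotation_content.confirm`/`export` to perform it.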

What This Demonstrates:

  • Queue Orchestration: Creates 7 queues (3 training + 3 test + 1 inbox) with consistent schemas

  • Knowledge Warmup: Uploads and annotates 90 training documents to teach the engine

  • Splitter Engine: Configures an AI engine to detect document boundaries and types

  • Hook Automation: Sets up a sophisticated webhook that automatically:

    • Splits multi-document PDFs into individual annotations
    • Removes blank pages intelligently
    • Routes split documents to correct queues by type
    • Suggests splits on initialization and executes on confirmation
  • End-to-End Testing: Validates the entire pipeline with test documents

This example showcases the agent's ability to orchestrate complex workflows involving multiple queues, engines, schemas, automated hooks with custom logic, and intelligent document routing - all from a single conversational prompt.

Repository Structure

Core packages:

Supporting packages (used for development, deployment, and integration):

Quick Start

Prerequisites: Python 3.12+, uv, Rossum account with API credentials

git clone https://github.com/rossumai/rossum-agents.git
cd rossum-agents

# Install all packages with all features
uv sync --all-extras

# AWS Bedrock (the agent uses Claude via Bedrock)
export AWS_PROFILE="rossum-dev"
export AWS_REGION="eu-west-1"

# Start PostgreSQL (session storage) and Valkey (change tracking)
docker-compose up -d postgres valkey

# Run the agent REST API
uv run rossum-agent-api

The agent expects PostgreSQL on localhost:5432 and Valkey on localhost:6379 by default. Override via POSTGRES_* and VALKEY_* env vars (see CLAUDE.md).

Install the TUI (Fabry) from the local checkout to chat with the agent from your terminal:

# Build the TypeScript client first (TUI depends on it via file:)
cd rossum-agent-client-ts
npm install
npm run build

# Build and link the TUI
cd ../rossum-agent-tui
npm install
npm run build
npm link    # exposes `fabry` on your PATH

fabry \
  --api-url http://localhost:8000 \
  --token your-token \
  --rossum-url your-api-url

See rossum-agent-tui/README.md for flags, keybindings, and session persistence.

For individual package details, see rossum-mcp/README.md and rossum-agent/README.md. See CLAUDE.md for the full list of configuration options (AWS Bedrock, Valkey, logging, etc.).

Installation & Usage

MCP Server with Claude Desktop

Best for: Interactive use with Claude Desktop

Configure Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "rossum": {
      "command": "uvx",
      "args": ["rossum-mcp"],
      "env": {
        "ROSSUM_API_TOKEN": "${ROSSUM_API_TOKEN}",
        "ROSSUM_API_BASE_URL": "${ROSSUM_API_BASE_URL}",
        "ROSSUM_MCP_MODE": "read-write"
      }
    }
  }
}

Or run standalone: rossum-mcp

Documentation

Resources

Development

# Install with all development dependencies
uv pip install -e "rossum-mcp[all]" -e "rossum-agent[all]"

# Run tests
pytest

# Run regression tests (validates agent behavior)
pytest regression_tests/ -v -s

# Lint and type check
pre-commit run --all-files

See regression_tests/README.md for the agent quality evaluation framework.

License

MIT License - see LICENSE for details.

Contributing

Contributions welcome! See individual package READMEs for development guidelines.
