Adaptive Graph of Thoughts

An intelligent scientific reasoning framework that uses graph structures and Neo4j to perform advanced reasoning via the Model Context Protocol (MCP).


πŸš€ Next-Generation AI Reasoning Framework for Scientific Research

Leveraging graph structures to transform how AI systems approach scientific reasoning


πŸ” Overview

Adaptive Graph of Thoughts (AGoT) is a high-performance MCP server that implements the Advanced Scientific Reasoning Graph-of-Thoughts (ASR-GoT) framework. It uses a Neo4j graph database as a dynamic knowledge store and exposes reasoning capabilities through the Model Context Protocol (MCP), enabling seamless integration with AI assistants like Claude Desktop.

Key Highlights

| Feature | Description |
| --- | --- |
| 🧠 Graph-Based Reasoning | Multi-stage pipeline with 8 specialized reasoning stages |
| 📊 Dynamic Confidence Scoring | Multi-dimensional evaluation with uncertainty quantification |
| 🔬 Evidence Integration | Real-time connection to PubMed, Google Scholar & Exa Search |
| ⚡ High Performance | Async FastAPI with Neo4j graph operations |
| 🔌 MCP Protocol | Native Claude Desktop & VS Code integration |
| 🐳 Cloud-Ready | Full Docker + Kubernetes (Helm) support |

πŸ—οΈ System Architecture

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#4A90D9', 'primaryTextColor': '#fff', 'primaryBorderColor': '#2C5F8A', 'lineColor': '#666', 'secondaryColor': '#52B788', 'tertiaryColor': '#F8F9FA'}}}%%
graph TB
    subgraph Clients["πŸ–₯️ Client Layer"]
        CD["πŸ€– Claude Desktop"]
        VS["πŸ’» VS Code / Cursor"]
        CC["πŸ”— Custom MCP Clients"]
    end

    subgraph Gateway["🌐 API Gateway Layer"]
        MCP_EP["⚑ MCP Endpoint\n/mcp"]
        NLQ_EP["πŸ” NLQ Endpoint\n/nlq"]
        GE_EP["πŸ“Š Graph Explorer\n/graph"]
        HE["πŸ’š Health Check\n/health"]
    end

    subgraph Core["🧠 Core Application Layer"]
        direction TB
        GTP["πŸ”„ GoT Processor\nOrchestrator"]
        subgraph Pipeline["ASR-GoT 8-Stage Pipeline"]
            S1["1️⃣ Init &\nContext Setup"]
            S2["2️⃣ Query\nDecomposition"]
            S3["3️⃣ Hypothesis\nGeneration"]
            S4["4️⃣ Evidence\nIntegration"]
            S5["5️⃣ Pruning &\nMerging"]
            S6["6️⃣ Subgraph\nExtraction"]
            S7["7️⃣ Synthesis &\nComposition"]
            S8["8️⃣ Reflection &\nAudit"]
            S1 --> S2 --> S3 --> S4 --> S5 --> S6 --> S7 --> S8
        end
        GTP --> Pipeline
    end

    subgraph Services["πŸ› οΈ Service Layer"]
        LLM["πŸ€– LLM Service\nOpenAI / Claude"]
        EDB["πŸ“š Evidence DB\nPubMed Β· Scholar Β· Exa"]
    end

    subgraph Storage["πŸ—„οΈ Storage Layer"]
        NEO4J["πŸ“¦ Neo4j\nGraph Database"]
        CONFIG["βš™οΈ Config\n(YAML + ENV)"]
    end

    Clients -->|"MCP JSON-RPC\nBearer Auth"| Gateway
    MCP_EP --> GTP
    NLQ_EP --> LLM
    GE_EP --> NEO4J
    GTP --> Services
    GTP --> NEO4J
    LLM --> EDB

    style Clients fill:#E3F2FD,stroke:#1565C0
    style Gateway fill:#F3E5F5,stroke:#6A1B9A
    style Core fill:#E8F5E9,stroke:#1B5E20
    style Services fill:#FFF8E1,stroke:#F57F17
    style Storage fill:#FCE4EC,stroke:#880E4F

πŸ”„ ASR-GoT Reasoning Pipeline

The 8-stage reasoning pipeline transforms a raw question into a comprehensive, evidence-backed answer stored in the knowledge graph:

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#7B68EE', 'edgeLabelBackground': '#fff'}}}%%
flowchart LR
    Q([❓ Scientific\nQuestion]) --> S1

    subgraph S1["Stage 1: Initialization"]
        I1["Set context\n& parameters"]
        I2["Create root\ngraph node"]
        I1 --> I2
    end

    subgraph S2["Stage 2: Decomposition"]
        D1["Identify\nsub-questions"]
        D2["Map knowledge\ndomains"]
        D1 --> D2
    end

    subgraph S3["Stage 3: Hypothesis"]
        H1["Generate\nhypotheses"]
        H2["Score initial\nconfidence"]
        H1 --> H2
    end

    subgraph S4["Stage 4: Evidence"]
        E1["Query PubMed\nScholar Β· Exa"]
        E2["Integrate\nevidence nodes"]
        E1 --> E2
    end

    subgraph S5["Stage 5: Pruning"]
        P1["Remove weak\nhypotheses"]
        P2["Merge related\nnodes"]
        P1 --> P2
    end

    subgraph S6["Stage 6: Subgraph"]
        SG1["Extract key\nsubgraphs"]
        SG2["Score relevance\n& centrality"]
        SG1 --> SG2
    end

    subgraph S7["Stage 7: Synthesis"]
        C1["Compose final\nnarrative"]
        C2["Build\nconclusions"]
        C1 --> C2
    end

    subgraph S8["Stage 8: Reflection"]
        R1["Audit\nconsistency"]
        R2["Return final\nresult"]
        R1 --> R2
    end

    S1 --> S2 --> S3 --> S4 --> S5 --> S6 --> S7 --> S8
    S8 --> A([βœ… Reasoned\nAnswer])

    style Q fill:#FFD700,stroke:#DAA520,color:#000
    style A fill:#90EE90,stroke:#228B22,color:#000

πŸ•ΈοΈ Knowledge Graph Connectome

The Neo4j knowledge graph captures the reasoning structure as a rich connectome β€” nodes represent concepts, hypotheses, and evidence, while edges represent semantic and logical relationships:

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#6C63FF', 'primaryTextColor': '#fff', 'edgeLabelBackground': '#f0f0ff'}}}%%
graph TD
    RootQuery["πŸ” Root Query\n(Session Node)"]

    subgraph Decomp["πŸ“ Decomposition Layer"]
        D1["πŸ“Œ Sub-question A\n[domain: biology]"]
        D2["πŸ“Œ Sub-question B\n[domain: chemistry]"]
        D3["πŸ“Œ Sub-question C\n[domain: physics]"]
    end

    subgraph Hypo["πŸ’‘ Hypothesis Layer"]
        H1["πŸ’­ Hypothesis 1\nconf: 0.85"]
        H2["πŸ’­ Hypothesis 2\nconf: 0.72"]
        H3["πŸ’­ Hypothesis 3\nconf: 0.61"]
        H4["πŸ’­ Hypothesis 4\nconf: 0.90"]
    end

    subgraph Evid["πŸ”¬ Evidence Layer"]
        E1["πŸ“„ PubMed Paper\nPMID: 38492"]
        E2["πŸ“„ Scholar Article\nDOI: 10.1038/..."]
        E3["🌐 Exa Result\nexpert consensus"]
        E4["πŸ“Š Statistical\nMeta-analysis"]
    end

    subgraph Synth["🎯 Synthesis Layer"]
        C1["βœ… Merged\nConclusion A"]
        C2["βœ… Merged\nConclusion B"]
        FINAL["πŸ† Final Answer\n[confidence: 0.88]"]
    end

    RootQuery -->|"DECOMPOSES_TO"| D1
    RootQuery -->|"DECOMPOSES_TO"| D2
    RootQuery -->|"DECOMPOSES_TO"| D3

    D1 -->|"GENERATES"| H1
    D1 -->|"GENERATES"| H2
    D2 -->|"GENERATES"| H3
    D3 -->|"GENERATES"| H4

    H1 -->|"SUPPORTED_BY"| E1
    H2 -->|"SUPPORTED_BY"| E2
    H3 -->|"CONTRADICTED_BY"| E3
    H4 -->|"SUPPORTED_BY"| E4

    H1 -->|"MERGES_WITH"| C1
    H4 -->|"MERGES_WITH"| C1
    H2 -->|"MERGES_WITH"| C2
    C1 -->|"SYNTHESIZES_TO"| FINAL
    C2 -->|"SYNTHESIZES_TO"| FINAL

    style RootQuery fill:#4A90D9,color:#fff,stroke:#2C5F8A
    style FINAL fill:#27AE60,color:#fff,stroke:#1E8449
    style H3 fill:#E74C3C,color:#fff,stroke:#C0392B

πŸ” Request Flow

%%{init: {'theme': 'base'}}%%
sequenceDiagram
    actor User as πŸ€– Claude / MCP Client
    participant API as ⚑ FastAPI Server
    participant Auth as πŸ” Auth Middleware
    participant GTP as 🧠 GoT Processor
    participant NEO as πŸ“¦ Neo4j DB
    participant LLM as πŸ€– LLM Service

    User->>API: POST /mcp {"method": "asr_got.query"}
    API->>Auth: Verify Bearer Token
    Auth-->>API: βœ… Authorized

    API->>GTP: Process query
    GTP->>NEO: Create session + root node

    loop 8 Pipeline Stages
        GTP->>LLM: Generate hypotheses / summaries
        LLM-->>GTP: LLM response
        GTP->>NEO: Write nodes & relationships
        NEO-->>GTP: Confirmed
    end

    GTP->>NEO: Extract final subgraph
    NEO-->>GTP: Final answer graph
    GTP-->>API: Structured result

    API-->>User: JSON-RPC response\n{result, confidence, graph_state}

πŸ“š Documentation

Full documentation including API reference, configuration guide, and contribution guidelines:

➑️ Adaptive Graph of Thoughts Documentation Site

πŸ“‚ Project Structure

Adaptive-Graph-of-Thoughts-MCP-server/
β”œβ”€β”€ πŸ“ .github/             # CI/CD workflows (CodeQL, Dependabot)
β”œβ”€β”€ πŸ“ agt_setup/           # Interactive setup wizard CLI
β”œβ”€β”€ πŸ“ config/              # settings.yaml configuration
β”œβ”€β”€ πŸ“ docs_src/            # MkDocs documentation source
β”œβ”€β”€ πŸ“ helm/                # Kubernetes Helm chart
β”œβ”€β”€ πŸ“ src/
β”‚   └── πŸ“ adaptive_graph_of_thoughts/
β”‚       β”œβ”€β”€ πŸ“ api/         # FastAPI routes & schemas
β”‚       β”œβ”€β”€ πŸ“ application/ # GoTProcessor orchestrator
β”‚       β”œβ”€β”€ πŸ“ domain/      # 8-stage pipeline & models
β”‚       β”œβ”€β”€ πŸ“ infrastructure/ # Neo4j utilities
β”‚       └── πŸ“ services/    # LLM & external API clients
β”œβ”€β”€ πŸ“ tests/               # Comprehensive test suite
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ docker-compose.yml
β”œβ”€β”€ pyproject.toml
└── README.md

πŸš€ Quick Start

git clone https://github.com/SaptaDey/Adaptive-Graph-of-Thoughts-MCP-server.git
cd Adaptive-Graph-of-Thoughts-MCP-server
poetry install
poetry run python -m agt_setup   # Interactive credential setup wizard
poetry run uvicorn adaptive_graph_of_thoughts.main:app --reload

Visit http://localhost:8000/docs for the interactive API documentation.

πŸš€ Getting Started

Deployment Prerequisites

Before running Adaptive Graph of Thoughts, either locally or in a Docker setup that does not use the provided docker-compose.prod.yml (which bundles Neo4j), ensure you have:

  • A running Neo4j Instance: Adaptive Graph of Thoughts requires a connection to a Neo4j graph database.

    • APOC Library: Crucially, the Neo4j instance must have the APOC (Awesome Procedures On Cypher) library installed. Several Cypher queries within the application's reasoning stages utilize APOC procedures (e.g., apoc.create.addLabels, apoc.merge.node). Without APOC, the application will not function correctly. You can find installation instructions on the official APOC website.
    • Configuration: Ensure that your config/settings.yaml (or corresponding environment variables) correctly points to your Neo4j instance URI, username, and password.
    • Indexing: For optimal performance, ensure appropriate Neo4j indexes are created. You can run python scripts/run_cypher_migrations.py to apply the provided Cypher migrations automatically. See Neo4j Indexing Strategy for details.

    Note: The provided docker-compose.yml (for development) and docker-compose.prod.yml (for production) already include a Neo4j service with the APOC library pre-configured, satisfying this requirement when using Docker Compose.
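As an illustration, index migrations can be applied programmatically with the official Neo4j Python driver. The label and property names below are hypothetical placeholders, not the project's actual schema; the authoritative statements are the Cypher migrations applied by scripts/run_cypher_migrations.py.

```python
# Illustrative sketch only: the labels/properties below are assumptions,
# not the project's real schema. Requires `pip install neo4j`.

# Hypothetical index statements; IF NOT EXISTS makes them safe to re-run.
INDEX_STATEMENTS = [
    "CREATE INDEX node_id_index IF NOT EXISTS FOR (n:Node) ON (n.id)",
    "CREATE INDEX session_id_index IF NOT EXISTS FOR (n:Session) ON (n.session_id)",
]

def apply_indexes(uri: str, user: str, password: str) -> None:
    """Apply each index statement in its own auto-commit transaction."""
    from neo4j import GraphDatabase  # imported lazily so the module loads without the driver

    with GraphDatabase.driver(uri, auth=(user, password)) as driver:
        for stmt in INDEX_STATEMENTS:
            driver.execute_query(stmt)
```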

Prerequisites

  • Python 3.11+ (as specified in pyproject.toml; the Docker images use Python 3.11.x–3.13.x)
  • Poetry: For dependency management
  • Docker and Docker Compose: For containerized deployment

Installation and Setup (Local Development)

  1. Clone the repository:

    git clone https://github.com/SaptaDey/Adaptive-Graph-of-Thoughts-MCP-server.git
    cd Adaptive-Graph-of-Thoughts-MCP-server
    
  2. Install dependencies using Poetry:

    poetry install
    

    This creates a virtual environment and installs all necessary packages specified in pyproject.toml.

  3. Activate the virtual environment:

    poetry shell
    
  4. Configure the application:

    # Copy example configuration
    cp config/settings.example.yaml config/settings.yaml
    
    # Edit configuration as needed
    vim config/settings.yaml
    
  5. Set up environment variables (optional):

    # Create .env file for sensitive configuration
    echo "LOG_LEVEL=DEBUG" > .env
    echo "API_HOST=0.0.0.0" >> .env
    echo "API_PORT=8000" >> .env
    

Secret Management

In production environments, set the SECRETS_PROVIDER environment variable to aws, gcp, or vault to fetch sensitive values from a supported secrets manager. Optionally provide <VAR>_SECRET_NAME variables (for example OPENAI_API_KEY_SECRET_NAME) to control the name of each secret. When a secrets provider is configured, values for OPENAI_API_KEY, ANTHROPIC_API_KEY, and NEO4J_PASSWORD are loaded automatically at startup.
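The resolution order described above can be sketched in a few lines of Python. This is an illustrative model of the documented behavior, not the project's actual implementation:

```python
# The three values the server resolves at startup, per the docs above.
MANAGED_SECRETS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "NEO4J_PASSWORD"]

def secret_name_for(var: str, env: dict) -> str:
    """Return the secret name to look up for `var`: an explicit
    <VAR>_SECRET_NAME override if present, else the variable name itself."""
    return env.get(f"{var}_SECRET_NAME", var)

def resolve_secrets(env: dict, fetch) -> dict:
    """If a supported SECRETS_PROVIDER is configured, fetch each managed
    secret via the provider-specific `fetch(name)` callable; otherwise
    fall back to plain environment variables."""
    if env.get("SECRETS_PROVIDER") in {"aws", "gcp", "vault"}:
        return {var: fetch(secret_name_for(var, env)) for var in MANAGED_SECRETS}
    return {var: env.get(var, "") for var in MANAGED_SECRETS}
```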

  6. Run the development server:

    python src/adaptive_graph_of_thoughts/main.py
    

    Alternatively, for more control:

    uvicorn adaptive_graph_of_thoughts.main:app --reload --host 0.0.0.0 --port 8000
    

    The API will be available at http://localhost:8000.

✨ Setup Wizard

An interactive wizard is available to streamline initial configuration.

poetry run python -m agt_setup

Then visit http://localhost:8000/setup to complete the web-based steps.

Setup wizard demo GIF will appear here in the full documentation.

Docker Deployment

graph TB
    subgraph "Development Environment"
        A[πŸ‘¨β€πŸ’» Developer] --> B[🐳 Docker Compose]
    end
    
    subgraph "Container Orchestration"
        B --> C[πŸ“¦ Adaptive Graph of Thoughts Container]
        B --> D[πŸ“Š Monitoring Container]
        B --> E[πŸ—„οΈ Database Container]
    end
    
    subgraph "Adaptive Graph of Thoughts Application"
        C --> F[⚑ FastAPI Server]
        F --> G[🧠 ASR-GoT Engine]
        F --> H[πŸ”Œ MCP Protocol]
    end
    
    subgraph "External Integrations"
        H --> I[πŸ€– Claude Desktop]
        H --> J[πŸ”— Other AI Clients]
    end
    
    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#e8f5e8
    style F fill:#fff3e0
    style G fill:#ffebee
    style H fill:#f1f8e9

  1. Quick Start with Docker Compose:

    # Build and run all services
    docker-compose up --build
    
    # For detached mode (background)
    docker-compose up --build -d
    
    # View logs
    docker-compose logs -f adaptive-graph-of-thoughts
    
  2. Individual Docker Container:

    # Build the image
    docker build -t adaptive-graph-of-thoughts:latest .
    
    # Run the container
    docker run -p 8000:8000 -v $(pwd)/config:/app/config adaptive-graph-of-thoughts:latest
    
  3. Production Deployment:

    # Use production compose file
    docker-compose -f docker-compose.prod.yml up --build -d
    

Kubernetes Deployment (Helm)

A minimal Helm chart is provided under helm/agot-server for running Adaptive Graph of Thoughts on a Kubernetes cluster.

helm install agot helm/agot-server

Customize values in helm/agot-server/values.yaml to set the image repository, resource limits, and other options.

Notes on Specific Deployment Platforms

  • Smithery.ai: Deploy using the included smithery.yaml.
    • Connect your GitHub repository on Smithery and click Deploy.
    • The container listens on the PORT environment variable (default 8000).
    • Health Checks rely on the /health endpoint.
    • The Dockerfile and docker-compose.prod.yml illustrate the container setup.
Access the Services:
    • API Documentation: http://localhost:8000/docs
    • Health Check: http://localhost:8000/health
    • MCP Endpoint: http://localhost:8000/mcp
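A minimal Python probe of the /health endpoint might look like the following; `check_server` is a convenience sketch that assumes a server listening on localhost:8000, while `is_healthy` simply interprets the response shape documented in the API section below.

```python
import json
from urllib.request import urlopen

def is_healthy(payload: dict) -> bool:
    """Interpret the /health response: {"status": "healthy", "version": ...}."""
    return payload.get("status") == "healthy"

def check_server(base_url: str = "http://localhost:8000") -> bool:
    """Probe a running server's /health endpoint (requires the server to be up)."""
    with urlopen(f"{base_url}/health", timeout=5) as resp:
        return is_healthy(json.load(resp))
```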

πŸ”Œ MCP Client Integration

Supported MCP Clients

Adaptive Graph of Thoughts supports integration with various MCP clients:

  • Claude Desktop - Full STDIO and HTTP support
  • VS Code - Via MCP extensions
  • Custom MCP Clients - Generic configuration available

Quick Client Setup

Claude Desktop / VS Code settings

{
  "mcpServers": {
    "adaptive-graph-of-thoughts": {
      "command": "python",
      "args": ["-m", "adaptive_graph_of_thoughts.main"],
      "cwd": "/path/to/Adaptive-Graph-of-Thoughts-MCP-server",
      "env": {
        "NEO4J_URI": "bolt://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "your_password",
        "MCP_TRANSPORT_TYPE": "stdio"
      }
    }
  }
}

Available MCP Tools

| Tool | Description |
| --- | --- |
| scientific_reasoning_query | Advanced scientific reasoning with graph analysis |
| analyze_research_hypothesis | Hypothesis evaluation with confidence scoring |
| explore_scientific_relationships | Concept relationship mapping |
| validate_scientific_claims | Evidence-based claim validation |

πŸ”Œ API Endpoints

The primary API endpoints exposed by Adaptive Graph of Thoughts are:

  • MCP Protocol Endpoint: POST /mcp

    • This endpoint is used for communication with MCP clients like Claude Desktop.
    • Example Request for the asr_got.query method:
      {
        "jsonrpc": "2.0",
        "method": "asr_got.query",
        "params": {
          "query": "Analyze the relationship between microbiome diversity and cancer progression.",
          "parameters": {
            "include_reasoning_trace": true,
            "include_graph_state": false
          }
        },
        "id": "123"
      }
      
    • Other supported MCP methods include initialize and shutdown.
  • Health Check Endpoint: GET /health

    • Provides a simple health status of the application.
    • Example Response:
      {
        "status": "healthy",
        "version": "0.1.0" 
      }
      

The advanced API endpoints previously listed (e.g., /api/v1/graph/query) are not implemented in the current version and are reserved for potential future development.
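For clients that speak plain HTTP rather than MCP STDIO, the asr_got.query request shown above can be assembled and sent with only the Python standard library. The helper names here are illustrative, not part of the project's API:

```python
import json
from urllib.request import Request, urlopen

def build_asr_got_request(query: str, request_id: str = "1",
                          include_reasoning_trace: bool = True,
                          include_graph_state: bool = False) -> dict:
    """Build the JSON-RPC 2.0 envelope for the asr_got.query method,
    mirroring the example payload documented above."""
    return {
        "jsonrpc": "2.0",
        "method": "asr_got.query",
        "params": {
            "query": query,
            "parameters": {
                "include_reasoning_trace": include_reasoning_trace,
                "include_graph_state": include_graph_state,
            },
        },
        "id": request_id,
    }

def post_query(query: str, url: str = "http://localhost:8000/mcp") -> dict:
    """Send the request to a running server (assumes the server is up)."""
    req = Request(url,
                  data=json.dumps(build_asr_got_request(query)).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=60) as resp:
        return json.load(resp)
```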

Session Handling (session_id)

Currently, the session_id parameter available in API requests (e.g., for asr_got.query) and present in responses serves primarily to identify and track a single, complete query-response cycle. It is also used for correlating progress notifications (like got/queryProgress) with the originating query.

While the system generates and utilizes session_ids, Adaptive Graph of Thoughts does not currently support true multi-turn conversational continuity where the detailed graph state or reasoning context from a previous query is automatically loaded and reused for a follow-up query using the same session_id. Each query is processed independently at this time.

Future Enhancement: Persistent Sessions

A potential future enhancement for Adaptive Graph of Thoughts is the implementation of persistent sessions. This would enable more interactive and evolving reasoning processes by allowing users to:

  1. Persist State: Store the generated graph state and relevant reasoning context from a query, associated with its session_id, likely within the Neo4j database.
  2. Reload State: When a new query is submitted with an existing session_id, the system could reload this saved state as the starting point for further processing.
  3. Refine and Extend: Allow the new query to interact with the loaded graphβ€”for example, by refining previous hypotheses, adding new evidence to existing structures, or exploring alternative reasoning paths based on the established context.

Implementing persistent sessions would involve developing robust strategies for:

  • Efficiently storing and retrieving session-specific graph data in Neo4j.
  • Managing the lifecycle (e.g., creation, update, expiration) of session data.
  • Designing sophisticated logic for how new queries merge with, modify, or extend pre-existing session contexts and graphs.

This is a significant feature that could greatly enhance the interactive capabilities of Adaptive Graph of Thoughts. Contributions from the community in designing and implementing persistent session functionality are welcome.
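To make the design concrete, here is a purely hypothetical sketch of the store such a feature might need. None of these names exist in the codebase; an in-memory dict stands in for the eventual Neo4j-backed storage, and the TTL models the lifecycle management mentioned above.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the persistent-session design discussed above.
# Nothing here exists in the project; names and fields are illustrative.

@dataclass
class SessionRecord:
    session_id: str
    graph_state: dict          # serialized subgraph for this session
    updated_at: float = field(default_factory=time.time)

class InMemorySessionStore:
    """Stand-in for a Neo4j-backed store with the same lifecycle operations."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self._records = {}
        self._ttl = ttl_seconds

    def save(self, session_id: str, graph_state: dict) -> None:
        self._records[session_id] = SessionRecord(session_id, graph_state)

    def load(self, session_id: str) -> Optional[dict]:
        rec = self._records.get(session_id)
        if rec is None or time.time() - rec.updated_at > self._ttl:
            self._records.pop(session_id, None)   # expire stale sessions
            return None
        return rec.graph_state
```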

Future Enhancement: Asynchronous and Parallel Stage Execution

Currently, the 8 stages of the Adaptive Graph of Thoughts reasoning pipeline are executed sequentially. For complex queries or to further optimize performance, exploring asynchronous or parallel execution for certain parts of the pipeline is a potential future enhancement.

Potential Areas for Parallelism:

  • Hypothesis Generation: The HypothesisStage generates hypotheses for each dimension identified by the DecompositionStage. The process of generating hypotheses for different, independent dimensions could potentially be parallelized. For instance, if three dimensions are decomposed, three parallel tasks could work on generating hypotheses for each respective dimension.
  • Evidence Integration (Partial): Within the EvidenceStage, if multiple hypotheses are selected for evaluation, the "plan execution" phase (simulated evidence gathering) for these different hypotheses might be performed concurrently.
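The fan-out over independent dimensions could look like the following asyncio sketch. The function names are illustrative stand-ins, not the actual HypothesisStage API:

```python
import asyncio

async def generate_hypotheses_for(dimension: str) -> list:
    """Stand-in for an LLM call that proposes hypotheses for one dimension."""
    await asyncio.sleep(0)  # placeholder for real network latency
    return [f"hypothesis about {dimension}"]

async def generate_all(dimensions: list) -> dict:
    """Fan out over independent dimensions with asyncio.gather;
    gather preserves input order, so results can be zipped back."""
    results = await asyncio.gather(
        *(generate_hypotheses_for(d) for d in dimensions)
    )
    return dict(zip(dimensions, results))

hypotheses = asyncio.run(generate_all(["biology", "chemistry", "physics"]))
```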

Challenges and Considerations:

Implementing parallel stage execution would introduce complexities that need careful management:

  • Data Consistency: Concurrent operations, especially writes to the Neo4j database (e.g., creating multiple hypothesis nodes or evidence nodes simultaneously), must be handled carefully to ensure data integrity and avoid race conditions. Unique ID generation schemes would need to be robust for parallel execution.
  • Transaction Management: Neo4j transactions for concurrent writes would need to be managed appropriately.
  • Dependency Management: Ensuring that stages (or parts of stages) that truly depend on the output of others are correctly sequenced would be critical.
  • Resource Utilization: Parallel execution could increase resource demands (CPU, memory, database connections).
  • Complexity: The overall control flow of the GoTProcessor would become more complex.

While the current sequential execution ensures a clear and manageable data flow, targeted parallelism in areas like hypothesis generation for independent dimensions could offer performance benefits for future versions of Adaptive Graph of Thoughts. This remains an open area for research and development.

πŸ§ͺ Testing & Quality Assurance

Development Commands

Continuous integration pipelines on GitHub Actions run tests, CodeQL analysis, and Microsoft Defender for DevOps security scans.

# Run full test suite with coverage using Poetry
poetry run pytest --cov=src --cov-report=html --cov-report=term

# Or using Makefile for the default test run
make test

# Run specific test categories (using poetry)
poetry run pytest tests/unit/stages/          # Stage-specific tests
poetry run pytest tests/integration/         # Integration tests
poetry run pytest -k "test_confidence"       # Tests matching pattern

# Type checking and linting (can also be run via Makefile targets: make lint, make check-types)
poetry run mypy src/ --strict                # Strict type checking
poetry run ruff check . --fix                # Auto-fix linting issues
poetry run ruff format .                     # Format code

# Pre-commit hooks (recommended)
poetry run pre-commit install                # Install hooks
poetry run pre-commit run --all-files       # Run all hooks (runs Ruff and MyPy)

# See Makefile for other useful targets like 'make all-checks'.

πŸ–₯ Dashboard Tour

Dashboard demo GIF coming soon.

πŸ’» IDE Integration

Use the vscode-agot extension to query the server from VS Code. Run the extension and execute AGoT: Ask Graph… from the Command Palette.

❓ Troubleshooting

If the server fails to start or setup reports errors, ensure your Neo4j instance is running and the credentials in .env are correct. Consult the console output for details.

πŸ—ΊοΈ Roadmap and Future Directions

We have an exciting vision for the future of Adaptive Graph of Thoughts! Our roadmap includes plans for enhanced graph visualization, integration with more data sources like Arxiv, and further refinements to the core reasoning engine.

For more details on our planned features and long-term goals, please see our Roadmap (also available on the documentation site).

🀝 Contributing

We welcome contributions! Please see our Contributing Guidelines (also available on the documentation site) for details on how to get started, our branching strategy, code style, and more.

πŸ“„ License

This project is licensed under the Apache License 2.0. See the License file for the full text.

πŸ”’ Security

Please see our Security Policy for reporting vulnerabilities and details on supported versions.

πŸ™ Acknowledgments

  • NetworkX community for graph analysis capabilities
  • FastAPI team for the excellent web framework
  • Pydantic for robust data validation
  • The scientific research community for inspiration and feedback
